r/StallmanWasRight

🔥 Hot · ▲ 1.2k · r/StallmanWasRight · +1 crossposts

Just had a heart attack. Is email not an option anymore?

Besides the rant, does anyone know what OIDC is, and whether it is a trustworthy option? Thanks in advance!

u/Nmx_10 — 7 days ago
▲ 26 · r/StallmanWasRight · +1 crossposts

Your car is the most expensive tracking device you own

The recent 2026 privacy probe into connected vehicles proves that your car is now a smartphone on wheels that you cannot turn off. Manufacturers are using always-on microphones and interior cameras to build profiles of your behaviour. This data is often sold to insurance companies to adjust your rates, or shared with third-party advertisers without your clear consent.

You can put your phone in a drawer or use a VPN on your laptop, but you cannot easily disconnect your car from the network. It is a constant leak of your private habits and locations.

At PureVPN, we believe that your movement should remain your business. It is time to demand the same privacy standards for your vehicle that you expect for your computer.

reddit.com
u/PureVPNcom — 6 days ago
▲ 9 · r/StallmanWasRight · +1 crossposts

SIM Binding and Aadhaar-Linked Mobiles: Regulatory Harassment

In the Indian banking space it has become very difficult to keep the semblance of freedom and ownership we used to have. The SIM-binding mandate has made it hell to use mobile apps across devices: you must keep the same SIM in the same phone, and you are locked to that handset, so it has become a real pain in the **. Earlier, SBI Net-Banking was very easy to log in to (they didn't treat you like a toddler but gave you options to decide): user ID, password, and the OTP, and voila, you had access. Now it treats everyone like a toddler, still fails to stop scams, and enforcement is so poor that the 5000 I lost to an online cheating fraud is still pending. Regulators focus on the easy regulatory "tick" items without doing the hard part: vigilance and enforcement.

Okay, for higher-value transactions it might be worth it, but why impose demanding mandates like SIM binding even for small things? What we need is differentiated offerings that a user can select from. The most important KYC details should be changeable offline only; everything else the user should be able to decide whether to activate. Secondly, nowadays I am seeing that for every credit card application or bank account, institutions accept only Aadhaar-linked mobile numbers. This is so restrictive, privacy-unfriendly, and a security vulnerability. This is paranoid-level: instead of controlling the scams, the RBI is harassing genuine consumers. The online banking flow has become so difficult and so congested for people that it is not worth it. What are your views?

reddit.com
u/fiercyfire — 6 days ago
🔥 Hot ▲ 113 r/StallmanWasRight

Richard Stallman on the term “artificial intelligence”

> ### “Artificial Intelligence”
>
> The moral panic over ChatGPT has led to confusion because people often speak of it as “artificial intelligence.” Is ChatGPT properly described as artificial intelligence? Should we call it that? Professor Sussman of the MIT Artificial Intelligence Lab argues convincingly that we should not.
>
> Normally, “intelligence” means having knowledge and understanding, at least about some kinds of things. A true artificial intelligence should have some knowledge and understanding. General artificial intelligence would be able to know and understand about all sorts of things; that does not exist, but we do have systems of limited artificial intelligence which can know and understand in certain limited fields.
>
> By contrast, ChatGPT knows nothing and understands nothing. Its output is merely smooth babbling. Anything it states or implies about reality is fabrication (unless “fabrication” implies more understanding than that system really has). Seeking a correct answer to any real question in ChatGPT output is folly, as many have learned to their dismay.
>
> That is not a matter of implementation details. It is an inherent limitation due to the fundamental approach these systems use.
>
> Here is how we recommend using terminology for systems based on trained neural networks:
>
> * “Artificial intelligence” is a suitable term for systems that have understanding and knowledge within some domain, whether small or large.
> * “Bullshit generators” is a suitable term for large language models (“LLMs”) such as ChatGPT, that generate smooth-sounding verbiage that appears to assert things about the world, without understanding that verbiage semantically. This conclusion has received support from the paper titled “ChatGPT is bullshit” by Hicks et al. (2024).
> * “Generative systems” is a suitable term for systems that generate artistic works for which “truth” and “falsehood” are not applicable.
>
> Those three categories of jobs are mostly implemented, nowadays, with “machine learning systems.” That means they work with data consisting of many numeric values, and adjust those numbers based on “training data.” A machine learning system may be a bullshit generator, a generative system, or artificial intelligence.
>
> Most machine learning systems today are implemented as “neural network systems” (“NNS”), meaning that they work by simulating a network of “neurons”—highly simplified models of real nerve cells. However, there are other kinds of machine learning which work differently.
>
> There is a specific term for the neural-network systems that generate textual output which is plausible in terms of grammar and diction: “large language models” (“LLMs”). These systems cannot begin to grasp the meanings of their textual outputs, so they are invariably bullshit generators, never artificial intelligence.
>
> There are systems which use machine learning to recognize specific important patterns in data. Their output can reflect real knowledge (even if not with perfect accuracy)—for instance, whether an image of tissue from an organism shows a certain medical condition, whether an insect is a bee-eating Asian hornet, whether a toddler may be at risk of becoming autistic, or how well a certain art work matches some artist's style and habits. Scientists validate the system by comparing its judgment against experimental tests. That justifies referring to these systems as “artificial intelligence.” Likewise the systems that antisocial media use to decide what to show or recommend to a user, since the companies validate that they actually understand what will increase “user engagement,” even though that manipulation of users may be harmful to them and to society as a whole.
>
> Businesses and governments use similar systems to evaluate how to deal with potential clients or people accused of various things. These evaluation results are often validated carelessly and the result can be systematic injustice. But since it purports to understand, it qualifies at least as attempted artificial intelligence.
>
> As that example shows, artificial intelligence can be broken, or systematically biased, or work badly, just as natural intelligence can. Here we are concerned with whether specific instances fit that term, not with whether they do good or harm.
>
> There are also systems of artificial intelligence which solve math problems, using machine learning to explore the space of possible solutions to find a valid solution. They qualify as artificial intelligence because they test the validity of a candidate solution using rigorous mathematical methods.
>
> When bullshit generators output text that appears to make factual statements but describe nonexistent people, places, and things, or events that did not happen, it is fashionable to call those statements “hallucinations” or say that the system “made them up.” That fashion spreads a conceptual confusion, because it presumes that the system has some sort of understanding of the meaning of its output, and that its understanding was mistaken in a specific case.
>
> That presumption is false: these systems have no semantic understanding whatsoever.

gnu.org
u/WonderOlymp2 — 9 days ago
▲ 10 · r/StallmanWasRight · +1 crossposts

Your cursor is an accidental lie detector

AI models are no longer just looking at where you click. They are looking at how you get there. Researchers at ETH Zurich recently conducted a seven-week study that found something unsettling: machine learning models can now detect workplace stress using only mouse movement and typing rhythm. In many cases, these behavioral signals were more reliable indicators of pressure than actual heart rate monitors.

The way you move your cursor when you are stressed is distinct and measurable. You might take longer paths to reach a button or make more mid-movement corrections. Your targeting precision drops and your trajectory shows more entropy. To an AI model, these are not just navigation errors. They are traces of your psychological state.
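To see how little is needed to extract signals like these, here is a minimal, hypothetical Python sketch (not the ETH Zurich method; the function name and feature choices are purely illustrative). It reduces a list of cursor samples to two of the features described above: path efficiency (straight-line distance divided by path length) and a count of direction reversals as a rough proxy for mid-movement corrections.

```python
import math

def cursor_features(points):
    """Compute simple trajectory features from cursor samples.

    points: list of (x, y) tuples sampled along one movement.
    Returns path efficiency (1.0 = perfectly straight) and a count
    of direction reversals along the x and y axes, a rough proxy
    for mid-movement corrections.
    """
    if len(points) < 3:
        return {"efficiency": 1.0, "corrections": 0}

    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    # Total distance actually travelled vs. the direct route.
    path_len = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    straight = dist(points[0], points[-1])
    efficiency = straight / path_len if path_len else 1.0

    # Count sign flips in per-axis deltas: each flip is a reversal.
    corrections = 0
    for axis in (0, 1):
        deltas = [points[i + 1][axis] - points[i][axis] for i in range(len(points) - 1)]
        signs = [d for d in deltas if d != 0]
        corrections += sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

    return {"efficiency": efficiency, "corrections": corrections}

# A straight drag: efficient, no corrections.
smooth = cursor_features([(0, 0), (10, 0), (20, 0), (30, 0)])
# A jittery drag that overshoots the target and doubles back.
jittery = cursor_features([(0, 0), (15, 4), (35, -3), (28, 2), (30, 0)])
```

A real classifier would feed dozens of such features, windowed over time, into a trained model; the point is that the raw material is just coordinates any webpage or workplace tool can already see.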

This type of monitoring is already moving from the research lab to the corporate office. In 2024, Wells Fargo dismissed employees after systems detected simulated activity designed to make them look active. Beyond the workplace, a Princeton study found that hundreds of major websites use session replay scripts to capture every scroll and movement you make. Your cursor behavior has become a form of telemetry that most people do not even realize they are broadcasting.

The transition from simple navigation to behavioral analytics represents a major shift in digital privacy. Traditional stress detection required your consent through a wearable or a survey. Now, the tools you use for work are providing that data automatically. It highlights a future where your workflow is not just a series of tasks but a constant stream of emotional and cognitive signals.

At PureVPN, we believe that your behavior is the ultimate form of personal data. As AI makes it easier for platforms to read your stress levels and decision friction through a mouse, the need for digital boundaries becomes even more critical. Your interactions should belong to you, not to the analytics systems watching your every move.

reddit.com
u/PureVPNcom — 7 days ago
▲ 3 · r/StallmanWasRight · +1 crossposts

When the world's most dangerous AI model accidentally leaves the door open

Anthropic just proved that even the architects of the most powerful offensive cyber tools can fall victim to a simple configuration error. This week, a CMS misconfiguration exposed Claude Mythos, an AI model designed specifically to find and exploit zero-day vulnerabilities autonomously.

The leak has sent shockwaves through the security industry because Mythos is not just a chatbot. It is a high-reasoning system that can chain exploits together faster than any human team. Wall Street reacted almost instantly, with cybersecurity stocks dropping as investors realized that current defensive tools might already be obsolete against this level of automation.

This incident highlights a massive paradox in 2026 security. We are building hyper-intelligent systems to protect us, but the weakest link remains the basic human error of a mismanaged server. If a frontier AI company can accidentally expose its own digital weapon, then every organization needs to rethink its internal access controls.

At PureVPN, we believe that as AI makes attacks faster, your defensive layer must become more invisible and persistent. Relying on a single firewall or a standard password is no longer a viable strategy when an AI like Mythos can scan for weaknesses in milliseconds. True security now requires a zero-trust architecture where no connection is assumed safe, even if it comes from within your own network.

The Mythos leak is a reminder that the tools of the future are already here, and they do not wait for us to be ready. It is time to audit your configurations and ensure that your most sensitive assets are not just behind a door, but encrypted and hidden from the automated scanners of the next generation.

reddit.com
u/PureVPNcom — 10 days ago