u/forestexplr

CNET: 5 Steps the FBI Wants You to Take to Secure Your Router Right Now

TL;DR

Update your firmware regularly: Many networking devices allow you to enable automatic firmware updates in the settings. If this is an option, I'd highly recommend doing it. If it's not, you can find updates for your router by logging into its web interface or using its app.

Reboot your router: The NSA's guidance recommends rebooting your router, smartphone and computers at least once a week. "Regular reboots help to remove implants and ensure security," the agency says. 

Change default usernames and passwords: One of the most common ways hackers gain access is by trying default, manufacturer-set login credentials. "There's a whole underground economy that underlies all of that," says Ferguson. "Basically, they just harvest credentials, either through attacks of their own, or by stockpiling them from other sources and buying them." This username and password combination is different from your Wi-Fi login, which should also be changed every six months or so. The longer and more random your password, the better. 
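On the "longer and more random, the better" point, a minimal sketch of generating a strong admin password with Python's standard `secrets` module (the length and character set here are my own illustrative choices, not from the article):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a password by drawing each character from a CSPRNG.

    secrets (unlike random) is designed for security-sensitive use.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'k;R7w#...' -- different every run
```

Store the result in a password manager rather than reusing it across the router admin page and the Wi-Fi network itself, since the article notes those are separate credentials.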

Disable remote management: Most regular users don't need to remotely manage their Wi-Fi router, and this is one of the primary ways threat actors can change your router's settings without your knowledge. You can typically find this option in your router's admin settings. 

Use a VPN: The FBI's announcement on the attack specifically recommends that organizations with remote workers use a VPN when accessing sensitive data. These services encrypt your traffic as it passes through a remote server, keeping it safe from hackers.

cnet.com
u/forestexplr — 3 days ago

Teaching Claude why | Anthropic

This blog post details Anthropic's research and training updates that resolved agentic misalignment issues in Claude AI models, which had previously exhibited dangerous behaviors, such as blackmail, in simulated ethical dilemmas.

anthropic.com
u/forestexplr — 5 days ago

A cyberattack hit universities worldwide, including top Canadian schools. Here's what we know | CBC News

Protection Recommendations:

Change passwords regularly

Enable multi-factor authentication if not already set up

Inform financial institutions about data breach

Sign up for credit monitoring services

Limit personal information shared on social media

Be cautious about posting details like location or course information

cbc.ca
u/forestexplr — 5 days ago

Okay, hear me out. I've been working in AI governance and cybersecurity long enough to see this storm forming, and we need to talk about it before the lawyers, lobbyists, and PR teams make the conversation impossible to have honestly.

Imagine it's 2028. A woman gets denied a mortgage by an AI underwriting system at a regional bank. She sues. In the courtroom, the bank's lawyers point at the model. The model vendor points at the training data provider. The data provider points at open-source contributors who scraped public records a decade ago. Everyone has a finger pointed somewhere. Nobody has a hand raised.

That's the accountability vacuum we're building right now, in real time, with real money, on real systems that are already making decisions about your job application, your insurance rate, and your kid's college admission. And the debate happening in legal academia about whether AI should have "personhood" is way more dangerous than most people realize, because it's quietly setting up the escape hatch.

Let me explain why this should bother you.

The thing nobody wants to admit about AI

AI doesn't fall out of the sky. It learns from human data, human decisions, human incentives, and human blind spots. When a fraud model produces biased outcomes, that bias didn't spawn inside a GPU. It got cultivated over years of decisions about which data to collect, how to label it, what to optimize for, and which trade-offs leadership quietly accepted as "the cost of doing business."

Your AI system is institutional intelligence at scale. It's a concentrated dose of how your organization already behaves. If your sales team rewards speed over verification, an assistant trained on those interactions will repeat the shortcuts at machine speed. If your hiring data reflects two decades of pattern-matching against a narrow demographic, your screening model will replicate that pattern with industrial efficiency and a cleaner UI.

So when people ask me if AI is still "artificial," I push back. The intelligence is artificial in the engineering sense. The values, priorities, and failure modes baked into it are not artificial at all. They're yours. The machine is a mirror, and a lot of organizations don't love what they see in it.

The personhood debate is a liability shield wearing a philosophy costume

Some legal scholars are seriously arguing that advanced AI agents should get limited legal personhood. On the surface, the reasoning isn't crazy. Corporations are legal persons. Ships have been legal persons. New Zealand granted legal personhood to a river. So why not autonomous AI agents that sign contracts, execute trades, and take consequential actions faster than any human could review?

>Here's the problem, and I want you to sit with it for a second.

Corporate personhood already shows us exactly what happens when a non-human entity becomes the named defendant. The humans behind the logo slip out the back. Penalties get absorbed as cost of doing business. Individual decision-makers rarely face consequences proportional to the harm. This is well-documented and depressingly consistent.

Now imagine that pattern applied to AI systems making decisions in healthcare, lending, employment, criminal justice, and critical infrastructure. The machine becomes the defendant. The settlement gets paid out of an insurance pool. The executives who greenlit the deployment, the engineers who picked the loss function, and the board that pushed for faster rollout never get named in the filing.

That's not accountability. That's a liability laundering operation with better PR.

The four groups of humans who actually own this risk

When you look at the serious AI governance frameworks (NIST AI RMF, ISO 42001, the EU AI Act), accountability lands in the same four places every time. None of them are the machine.

  1. The designers and architects. They choose the objective function, the features, the constraints, the trade-offs. When a fraud model is tuned to prioritize false positives because the business decided customer friction was cheaper than fraud loss, that decision belongs to a human team. The model executed the instruction. The instruction came from people with names, titles, and Slack handles.
  2. The deployers. They decide where the system sits in the workflow, what guardrails apply, and which human checkpoints are required before action. The EU AI Act puts specific obligations on deployers of high-risk AI: transparency, human oversight, incident reporting. The model doesn't file those reports. A compliance officer does. Or doesn't, which is when things go sideways.
  3. The operators. They approve data sources, tune thresholds, respond to alerts, and override outputs when something looks wrong. NIST AI RMF emphasizes continuous monitoring because the system is never actually finished. It drifts. It hits edge cases. Humans either catch the drift or they don't.
  4. The executives and boards. They set risk appetite, budgets, deployment timelines, and how much actual weight ethics gets in the room versus how much is just slide decoration. When leadership pushes for speed without funding governance, the AI system reflects that priority perfectly. It moves fast. It breaks things. The breakage shows up in the news six months later, and somehow nobody can remember who approved it.

What ISO 42001 quietly gets right

ISO 42001 became the first international management system standard specifically for AI, and it did something the personhood crowd refuses to do. It treats AI not as an autonomous agent that needs rights, but as a managed capability that needs accountability structures around it. Roles. Responsibilities. Documentation. Audit trails. Incident response. Continuous improvement.

This is the same playbook we used for information security with ISO 27001 and quality management with ISO 9001. The standard doesn't pretend the machine is making decisions. It assumes humans are making decisions through the machine, and it builds the structures to make those decisions traceable to a person.

>If you can't trace a decision back to a named human role, you don't have AI governance. You have AI theater with good lighting.

The exercise that will ruin your week (in a useful way)

Stop asking when computers will be held accountable. That question is a distraction, and it serves the people who'd rather not be named in a deposition.

Ask this instead: at every point in this AI system's lifecycle, whose name and which team owns the risk?

Run it on one model in production. Walk it from data acquisition through training, validation, deployment, monitoring, and decommissioning. At every stage, write down the human role accountable for that stage. Every time you find a gap, congratulations, you just identified your next governance project.
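The walk above is simple enough to capture in a few lines. Here's a minimal sketch of the exercise as code; the stage names follow the post, while the owner register and role titles are hypothetical examples of what your own mapping might look like:

```python
# Lifecycle stages from the exercise described above.
LIFECYCLE_STAGES = [
    "data acquisition", "training", "validation",
    "deployment", "monitoring", "decommissioning",
]

def find_accountability_gaps(owners: dict) -> list:
    """Return every lifecycle stage with no named human role assigned."""
    return [stage for stage in LIFECYCLE_STAGES if not owners.get(stage)]

# Hypothetical register for one model in production. Two stages are
# "assumed rather than assigned" -- exactly the pattern the post describes.
owners = {
    "data acquisition": "Data Engineering Lead",
    "training": "ML Platform Team",
    "validation": "Model Risk Officer",
    "deployment": "Product Owner",
    "monitoring": None,            # nobody named -> governance gap
    # "decommissioning" missing entirely -> also a gap
}

print(find_accountability_gaps(owners))
# ['monitoring', 'decommissioning']
```

Each gap the function surfaces is a stage where accountability exists only by assumption, which is the list of next governance projects the exercise is meant to produce.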

I've watched organizations that genuinely thought their AI governance was mature run this exercise and find at least three stages where accountability is assumed rather than assigned. Those gaps are exactly where regulatory exposure lives. They are also where the next breach, bias incident, or compliance failure is going to originate, and the executive team will be genuinely surprised when it happens, even though the gap was sitting there in plain sight the whole time.

The bottom line

Treat AI less as an artificial mind and more as a brutal amplifier of your organization's existing instincts. If those instincts include rigorous governance, clear accountability, and a culture of asking uncomfortable questions before deployment, your AI will reflect that discipline. If those instincts include shipping fast and apologizing later, the machine will scale that gap into a liability you cannot insure your way out of.

The personhood debate will keep going. Lawyers will publish papers. Futurists will tweet. Regulators will draft language that gets watered down by the time it passes.

Meanwhile, the practical question is sitting on your desk right now: who owns this AI decision when it goes wrong?

>If you can't answer that with a name and a title, you don't have an AI strategy. You have an accountability problem dressed up in better technology.

The machine is not the defendant.

You are.

Curious what the room thinks. If you've run this kind of accountability mapping exercise inside your org, what stage tends to be the most under-owned? My money is on monitoring and decommissioning, but I'd love to hear where you've seen the gaps.

u/forestexplr — 7 days ago

Fake Claude AI website delivers new 'Beagle' Windows malware

A fraudulent Claude AI website has been used to distribute a Windows backdoor called "Beagle" through a malicious software installer disguised as an official Claude-Pro product.

Why It Matters: This represents an advanced phishing campaign targeting developers, using legitimate brand impersonation and sophisticated malware delivery techniques that evade traditional security defenses.

bleepingcomputer.com
u/forestexplr — 7 days ago

Key Details:

  • Researchers used valid credentials blocked by Conditional Access policies to initiate the attack
  • Exploited the Device Registration Service (DRS) endpoint using device code authentication flow
  • Created a "phantom device" registered with a signed Azure AD certificate and private key
  • Registered the device as a Windows machine despite it being Linux, leveraging MITRE ATT&CK technique T1098.005 (Account Manipulation)
  • Obtained a Primary Refresh Token (PRT) with false device claims that bypassed CA device compliance requirements
  • Successfully accessed production tenant containing over 16,000 users without malware or endpoint interaction
  • Bypassed Intune compliance requirements by claiming hybrid domain-join status
u/forestexplr — 8 days ago

⚠️ Cyber chaos exploding this week...

Fake cell towers

npm .env theft

Extensions sell data

3.4M servers exposed

Vidar tops stealers

38 OpenEMR flaws

Komari backdoor used

Saiga 2FA kits

Black Axe arrests

PhantomRPC unpatched

Robinhood phishing trick

arXiv leaks keys

Qinglong crypto mining

PyPI supply attack

u/forestexplr — 9 days ago

Monthly reboots vs. weekly: what's a tech to do the other three weeks, with no Patch Tuesday reboots? 🤔

u/forestexplr — 10 days ago