u/SnooEpiphanies6878

https://www.lelibrepenseur.org/samuel-hassine-patron-de-filigran-ecarte-du-voyage-de-macron-pour-pedopornographie/

New elite pedophilia scandal: a brutal fall. Samuel Hassine, founder and CEO of Filigran, a leading French cybersecurity company, was urgently removed from the delegation accompanying Macron to Asia. Aged 39, he is suspected of having purchased child pornography images and videos on the Darknet using cryptocurrencies.

Filigran, founded in 2022, develops cyber threat intelligence and attack simulation tools. The company is used by over 6,000 organizations worldwide, including the FBI, the European Commission, and several US agencies. Hassine, a former ANSSI employee, had raised tens of millions of euros and positioned his startup as a flagship of the French Tech scene.

According to reports, he is among the twenty buyers identified in France in a vast European investigation into a clandestine Darknet platform. Investigators have arrested several suspects for possessing and acquiring particularly serious child pornographic material. The Élysée Palace reacted swiftly by excluding him from the presidential trip to Japan and South Korea.

This scandal has once again tarnished the reputations of the French Tech scene and those in power. A rising figure in cybersecurity, with close ties to government institutions, finds himself at the heart of a sordid criminal case. The investigation is ongoing. 

Additional links:
https://www.leparisien.fr/faits-divers/pedopornographie-un-patron-de-la-french-tech-prevu-dans-la-delegation-demmanuel-macron-en-asie-mis-en-cause-apres-un-vaste-coup-de-filet-03-04-2026-CULCDDQMQNFB5NQQ4WXV2UHEPQ.php (paywalled)
https://x.com/BastionMediaFR/status/2040111909546938799
https://www.instagram.com/p/DWth0BhiGhS/
https://geopolintel.fr/article4521.html

u/SnooEpiphanies6878 — 9 days ago

RCE in LiteLLM (CVE-2026-42208): How Two Vulnerabilities and 36 Hours Turn an AI Gateway into a Backdoor

Adversaries are increasingly targeting exposed AI middleware. LiteLLM, one of the most widely used open-source AI gateways, acts as an intermediary between an organization's applications and upstream model providers such as OpenAI, Anthropic, Bedrock, Vertex, Azure OpenAI, and many others. It translates calls into a unified interface, enforces rate limits, logs usage, and manages traffic routing.
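The "unified interface" idea can be sketched in a few lines: the gateway resolves a `provider/model` identifier to the right upstream backend before forwarding the request. This is an illustrative sketch of the routing concept, not LiteLLM's actual code; the endpoint table and function names are ours.

```python
# Illustrative sketch of what an AI gateway's router does: map one
# request shape onto many provider backends. Not LiteLLM's real code.
PROVIDER_ENDPOINTS = {
    "openai": "https://api.openai.com/v1/chat/completions",
    "anthropic": "https://api.anthropic.com/v1/messages",
    "azure": "https://{resource}.openai.azure.com/openai/deployments/{deployment}/chat/completions",
}

def route(model: str) -> tuple[str, str]:
    """Resolve a 'provider/model' identifier to (provider, endpoint URL)."""
    provider, _, _name = model.partition("/")
    if provider not in PROVIDER_ENDPOINTS:
        raise ValueError(f"unknown provider: {provider!r}")
    return provider, PROVIDER_ENDPOINTS[provider]
```

Everything downstream (rate limiting, logging, credential injection) hangs off this single choke point, which is exactly why compromising it is so valuable.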

Three properties make it an unusually high-value target:

  1. It holds the keys to every model provider in the stack. A compromised LiteLLM proxy means stolen OpenAI, Anthropic, and cloud provider credentials — often with elevated quotas attached.
  2. It logs prompts and responses by default. That includes whatever sensitive data the application is shipping into the model: customer PII, internal documents, code, credentials pasted into copilots.
  3. It is overwhelmingly deployed to the internet. Most teams stand it up as a proxy at the edge so internal services and partner integrations can hit it. In production scans we routinely see LiteLLM instances answering on public IPs.
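Risk #2 above is partially mitigable: prompts can be scrubbed before they ever reach the gateway's logs. The sketch below is a hypothetical pre-log redaction hook, assuming regex-detectable secrets; the patterns are examples, not exhaustive, and the function names are ours, not a LiteLLM API.

```python
import re

# Hypothetical pre-log redaction hook: scrub obvious secrets and PII from
# a prompt before it is written to gateway logs. Patterns are illustrative
# examples only; real deployments need a proper DLP pass.
PATTERNS = [
    (re.compile(r"sk-[A-Za-z0-9_-]{20,}"), "[REDACTED_API_KEY]"),   # OpenAI-style keys
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),   # email addresses
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED_SSN]"),       # US SSN format
]

def redact(text: str) -> str:
    """Return text with each matched pattern replaced by its placeholder."""
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```

Redaction reduces the blast radius of a compromised proxy, but it does not address risk #1: the gateway still holds live provider credentials.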
u/SnooEpiphanies6878 — 10 days ago

This guidance was co-authored by the Australian Signals Directorate’s Australian Cyber Security Centre (ASD’s ACSC), the United States Cybersecurity and Infrastructure Security Agency (CISA) and National Security Agency (NSA), the Canadian Centre for Cyber Security (Cyber Centre), the New Zealand National Cyber Security Centre (NCSC-NZ) and the United Kingdom National Cyber Security Centre (NCSC-UK). Throughout this guidance, these organisations are referred to as the ‘authoring agencies’. This guidance discusses key cyber security challenges and risks associated with the introduction of agentic AI into IT environments, as well as best practices for securing agentic AI systems.

The authoring agencies strongly recommend aligning agentic AI risks and mitigation strategies with your organisation’s existing security model and risk posture. The authoring agencies further recommend adopting agentic AI with security in mind, assessing its use and never granting it broad or unrestricted access, especially to sensitive data or critical systems. Additionally, organisations should only use agentic AI for low-risk and non-sensitive tasks.
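The "never grant broad or unrestricted access" recommendation can be made concrete with a tool allowlist: the agent only ever sees tools explicitly approved for its risk tier. This is a minimal sketch of that pattern under our own assumptions; the tool names and tiers are illustrative, not taken from the guidance.

```python
# Illustrative least-privilege gate for an agentic AI system: tools are
# partitioned by risk, and an agent is only granted allowlisted low-risk
# tools. Tool names and tiers are hypothetical examples.
LOW_RISK = {"search_docs", "summarise_text"}
HIGH_RISK = {"delete_records", "send_payment", "run_shell"}

def allowed_tools(requested: set[str], tier: str = "low") -> set[str]:
    """Return only the tools an agent at this tier may invoke."""
    if tier != "low":
        # Mirrors the guidance: agentic AI only for low-risk tasks.
        raise ValueError("policy permits only the low-risk tier")
    return requested & LOW_RISK

granted = allowed_tools({"search_docs", "run_shell"})
```

A deny-by-default intersection like this (rather than a blocklist) means a newly added high-risk tool is unavailable until someone deliberately allowlists it.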

Scope and audience

This guidance primarily focuses on large language model (LLM)-based agentic AI systems. It considers both threats to and vulnerabilities within agentic AI systems, as well as risks arising from agentic AI behaviour. This includes risks introduced through system components, integrations and downstream use.

The authoring agencies developed this guidance to support government, critical infrastructure and industry stakeholders in understanding the key security challenges and risks posed by agentic AI. It provides practical guidance to help organisations that design, develop, deploy and operate agentic AI systems make informed risk assessments and apply mitigations. The guidance concludes with actionable recommendations to help organisations prepare for and defend against emerging and future agentic AI threats.

cyber.gov.au
u/SnooEpiphanies6878 — 12 days ago