u/Any_Good_2682

The Agentic Economy has a security problem. We built Sigui—an autonomous, sub-50ms firewall fine-tuned on AMD MI300X.

AI Agents are the new users of the internet.

They are no longer just summarizing text—they are managing USDC, interacting with protocols, and making economic decisions. But here is the problem: legacy security is too slow. You can't ask a human to "approve" a transaction that an agent needs to execute in milliseconds.

That’s why we built Sigui.

Sigui is a synchronous security oracle that acts as a real-time filter for agentic interactions. It evaluates every move before it hits the chain.

How it works (The Tech):

  1. 🧬 Graph DNA Analysis: We fine-tuned Imina Na (a Vision-Language model) on a dataset of 100,000+ real transactions. Instead of simple rules, it analyzes the topological structure of agent behavior to detect malicious patterns.
  2. ⚡ Hardware Acceleration: Security is only useful if it's fast. Running on AMD MI300X with ROCm and vLLM, we’ve pushed inference latency below 50ms.
  3. 🏛️ On-Chain Accountability: Every ALLOW, BLOCK, or ESCALATE decision is logged on the Arc L1 blockchain. We’ve built an immutable audit trail for autonomous agents.
  4. 🧠 Self-Adapting Policy: Integrated with a DAO-led governance system, the firewall updates its risk weights based on collective intelligence.
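As a sketch of what a synchronous ALLOW/BLOCK/ESCALATE gate looks like in practice, here is a minimal pure-Python version. The thresholds, the blocklist, and the stubbed risk score are invented for illustration and are not Sigui's actual policy or API:

```python
import time

# Illustrative thresholds: the real risk weights are DAO-governed
# and not published in the post.
ALLOW_BELOW = 0.30
BLOCK_ABOVE = 0.70
KNOWN_DRAINERS = {"0xBAD"}  # hypothetical blocklist entry

def model_risk_score(tx: dict) -> float:
    """Stand-in for the fine-tuned vision model's risk score in [0, 1].
    Faked with a trivial lookup so the sketch stays runnable."""
    return 0.9 if tx.get("to") in KNOWN_DRAINERS else 0.1

def evaluate(tx: dict):
    """Synchronous gate: score the transaction, then map the score to
    ALLOW / BLOCK / ESCALATE. Returns (decision, latency_ms)."""
    start = time.perf_counter()
    score = model_risk_score(tx)
    if score < ALLOW_BELOW:
        decision = "ALLOW"
    elif score > BLOCK_ABOVE:
        decision = "BLOCK"
    else:
        decision = "ESCALATE"  # ambiguous case: route to slower review
    return decision, (time.perf_counter() - start) * 1000.0
```

In the architecture described above, the returned decision would also be appended to the Arc L1 audit log before the transaction is released.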

Why it matters: Without a dedicated security layer, the agentic economy is a playground for prompt injection and topological hacks. Sigui provides the "Trust Layer" that allows agents to operate at scale.

Open Source & Demo: We’ve open-sourced the entire stack, from the FastAPI gateway to the Next.js dashboard.

I'd love to discuss with the community: How do you see the evolution of security as agents become the primary users of Web3?

u/Any_Good_2682 — 5 days ago
▲ 3 r/ethereum+3 crossposts

I just generated 1,000,000 transaction graph visualizations from real Ethereum/Arbitrum/Polygon data — now training a Vision-Language model to detect DeFi attacks

Hey everyone,

I'm building Sigui, a DePIN security oracle for AI agents. Today I hit a milestone I'm proud of:

Dataset: https://huggingface.co/datasets/Ibonon/sigui-depin-1m

What's in it:

  • 1,000,000 visual transaction graph images generated from 1.87M real on-chain transactions (Ethereum, Arbitrum, Polygon)
  • Each graph is annotated with attack topology labels: DRAIN_STAR, MIXING_CHAIN, NORMAL
  • Generated in ~1h15 using 20-core parallel processing on AMD MI300X
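As a sketch of how such a corpus can be fanned out across workers, here is a toy version of the generation loop. It is not the project's pipeline: the real run used 20-core multiprocessing and rendered PNGs with NetworkX/Matplotlib, while this stand-in uses a thread pool and stops at labeled edge lists so it stays self-contained:

```python
import random
from concurrent.futures import ThreadPoolExecutor

LABELS = ("NORMAL", "DRAIN_STAR", "MIXING_CHAIN")

def synth_graph(seed: int) -> dict:
    """Build one labeled transaction graph as an edge list.
    (The real pipeline renders the graph to an image; we stop here.)"""
    rng = random.Random(seed)
    label = LABELS[seed % len(LABELS)]
    if label == "DRAIN_STAR":
        # many victims funnel into one hub address
        edges = [(f"v{i}", "hub") for i in range(rng.randint(20, 50))]
    elif label == "MIXING_CHAIN":
        # hop-by-hop chain through intermediate mixers
        n = rng.randint(10, 30)
        edges = [(f"m{i}", f"m{i + 1}") for i in range(n)]
    else:
        # sparse organic activity between random accounts
        n = rng.randint(5, 15)
        edges = [(f"a{rng.randrange(n)}", f"a{rng.randrange(n)}")
                 for _ in range(n)]
    return {"id": seed, "label": label, "edges": edges}

def generate(n: int, workers: int = 20) -> list:
    """Fan the per-graph work out across a worker pool."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(synth_graph, range(n)))
```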

What I'm doing with it: I'm currently fine-tuning Qwen2-VL-7B via LoRA on this dataset using AMD ROCm. The goal is a model that sees attack patterns in transaction graphs instead of relying on static rules. This will power Imina-Na V2, the vision brain of my security oracle.

If you want to try V1 right now: https://huggingface.co/Ibonon/imina_na_lora — the first vision model trained on DePIN transaction graphs. Feedback welcome.

The standard behind this: I also co-authored ERC-8259, a proposed Ethereum standard for AI Agent Identity & Threat Registry. https://ethereum-magicians.org/t/erc-8259-ai-agent-identity-threat-registry/28473 https://github.com/ibonon/ERCs

The dataset is fully open (MIT license). Would love feedback on the graph generation approach, annotation quality, or the ERC proposal.

u/Any_Good_2682 — 5 days ago

Hi everyone,

I’ve been working on a computer vision approach to a specific security problem in the "Agentic Economy": identifying malicious transaction patterns that are mathematically obfuscated but topologically distinct.

The Problem

Traditional rule-based security engines and even standard GNNs often struggle with "splitting attacks"—where a high-value transaction is fragmented into thousands of micro-transactions to bypass statistical thresholds. However, when these flows are projected as a 2D graph topology, they exhibit very specific adversarial signatures (Star patterns, centralized hubs, mixing chains).
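The "topologically distinct" claim can be made concrete even without a model: a drain star concentrates in-degree on a single hub, while a mixing chain is a long path of low-degree nodes. A crude structural classifier (thresholds invented for illustration) might look like:

```python
from collections import Counter

def classify_topology(edges) -> str:
    """Crude structural classifier over a list of (src, dst) edges.
    DRAIN_STAR   -- one node receives the large majority of edges
    MIXING_CHAIN -- long path where every node has degree <= 2
    NORMAL       -- anything else
    Thresholds are illustrative, not tuned."""
    if not edges:
        return "NORMAL"
    in_deg = Counter(dst for _, dst in edges)
    _, hub_deg = in_deg.most_common(1)[0]
    if hub_deg >= 10 and hub_deg / len(edges) >= 0.8:
        return "DRAIN_STAR"
    deg = Counter()
    for src, dst in edges:
        deg[src] += 1
        deg[dst] += 1
    if len(edges) >= 8 and all(d <= 2 for d in deg.values()):
        return "MIXING_CHAIN"
    return "NORMAL"
```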

The Approach: VLM for Graph Classification

Instead of relying on graph embeddings, I’ve experimented with a Vision-Language approach using Qwen2-VL-2B-Instruct. The intuition is that VLMs are increasingly efficient at recognizing structural relationships in 2D layouts.

Technical Specs:

Base Model: Qwen2-VL-2B-Instruct.

Fine-tuning: LoRA (r=16, alpha=32) targeting attention projections (q, k, v, o).

Dataset (Dogon-10K): I generated 10,000 synthetic transaction graph images using NetworkX and Matplotlib. The dataset covers four classes: NORMAL, DRAIN_STAR, MIXING_CHAIN, and COORDINATED_CLUSTER.

Hardware / Stack: Trained on an AMD MI300X using the ROCm stack. This was a great opportunity to stress-test PEFT/TRL on AMD hardware for vision-centric tasks.
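For anyone reproducing the setup, the specs above translate into a PEFT config roughly like the following. This is a sketch, not the project's actual training script: the `target_modules` names are the usual ones for Qwen2-family attention blocks (verify against your checkpoint with `model.named_modules()`), and the dropout value is assumed, not stated in the post:

```python
from peft import LoraConfig

# LoRA setup matching the specs above: r=16, alpha=32, attention
# projections (q, k, v, o). Dropout is an assumed value.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,  # assumed; not stated in the post
    task_type="CAUSAL_LM",
)
```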

Why VLM over GNN?

While GNNs are the standard for graph data, the "image-based" approach allowed for faster prototyping of adversarial pattern recognition without the complexity of building a custom graph auto-encoder for every new chain's schema. The VLM’s ability to interpret "visual intent" proved highly effective at distinguishing a decentralized organic ecosystem from a coordinated sybil attack.

Model & Code

The LoRA weights are available on Hugging Face for anyone interested in testing visual graph classification:

https://huggingface.co/Ibonon/imina_na_lora

The full source code for the inference engine and the Dogon dataset generator is currently being cleaned up.

GitHub: [Under Construction]

I’m particularly interested in hearing if anyone else is using VLMs for visual anomaly detection in abstract data structures (like graphs or network logs).

reddit.com
u/Any_Good_2682 — 7 days ago
▲ 1 r/AutoGPT+1 crossposts


Hey everyone!

I’ve been working on a security layer for the Agentic Economy during a hackathon, and I just hit a major milestone.

The problem: As AI agents start handling real money, they are becoming prime targets for "drainers" and sophisticated splitting attacks that traditional rule-based security misses.

The solution: ArcWarden & Imina Na. I’ve developed a vision-language security oracle. Instead of just looking at raw data, it "sees" transaction patterns.

The Tech Stack:

  • Model: Fine-tuned Qwen2-VL (Vision-Language Model).
  • Hardware: Trained on the beast AMD MI300X (ROCm).
  • Dataset: 10,000+ transaction graph patterns (Dogon Dataset).
  • Platform: Live dashboard (Sigui) connected to the Arc Testnet.

I just pushed the trained LoRA weights to Hugging Face! 🥇

I need your feedback! I’m looking for testers and devs to check out the dashboard and tell me what you think about using Vision AI for blockchain security. Can an AI "Oracle" actually stop the next big drainer?

🔗 Check the model on Hugging Face: https://huggingface.co/Ibonon/imina_na_lora

u/Any_Good_2682 — 8 days ago

Hey r/ethereum,

I just submitted ArcWarden to a lablab.ai hackathon on Arc L1. Wanted to share what I built because the concept is a bit different from what you usually see in the agentic space.

The problem

Autonomous AI agents managing USDC wallets on blockchain have zero native security layer. A compromised agent can drain a wallet in seconds. Existing solutions cost $0.30+ per transaction; on $0.001 nano-payments, that cost is economically impossible to justify.

What I built

ArcWarden is an autonomous security agent that charges $0.001 USDC to evaluate every transaction from another agent before it executes. It has its own Circle wallet, its own treasury, and autonomously pays its own intelligence providers (Claude API). It's not a monitoring tool bolted on the outside — it's a participant in the economy it secures.

4 simultaneous protection layers:

Behavior analysis — amount vs. agent historical average, frequency spikes, trust score

Anti-splitting — 10-minute sliding windows. An attacker fragmenting $45 into 90 micro-transactions of $0.50 gets blocked at transaction #9

Service reputation — if 3 agents report a fraudulent service, every subsequent agent is automatically protected. Collective learning, no human in the loop

Contract analysis — EVM bytecode inspection, unprotected drain functions, upgradeable proxy detection

Every decision returns ALLOW / BLOCK / ESCALATE in under 5ms.
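The anti-splitting layer above can be sketched in a few lines. The 10-minute window and the block-at-transaction-#9 behavior come from the post; the micro-transaction cutoff and everything else are assumptions:

```python
import time
from collections import deque

WINDOW_SECONDS = 600   # 10-minute sliding window (from the post)
MAX_MICRO_TX = 8       # the 9th micro-tx inside the window blocks
MICRO_THRESHOLD = 1.0  # "micro" = under $1 (assumed)

class AntiSplitting:
    """Per-sender sliding window of recent micro-transactions."""

    def __init__(self):
        self.recent = {}  # sender -> deque of timestamps

    def check(self, sender, amount, now=None) -> str:
        now = time.time() if now is None else now
        q = self.recent.setdefault(sender, deque())
        # evict events that fell out of the 10-minute window
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if amount < MICRO_THRESHOLD:
            q.append(now)
            if len(q) > MAX_MICRO_TX:
                return "BLOCK"
        return "ALLOW"
```

Feeding it the post's scenario (90 transactions of $0.50) blocks at the ninth, exactly as described.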

What makes this real and not just a demo

The thing I'm most proud of: a Vyper 0.4.3 smart contract deployed on Arc testnet that immutably records every blocked attack — pattern hash, attacker address, attempted amount, risk score, triggering layer.

Contract v1 (migrated for a technical reason: updating the ABI's first parameter from String[64] to address produced a completely different 4-byte function selector, so calls encoded against the old ABI were silently rejected by the EVM) recorded 748 attacks and $1,682.92 USDC protected during testing.
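To see why the migration was needed: an EVM function selector is the first four bytes of the Keccak-256 hash of the canonical signature, so changing a single parameter type changes all four bytes. The sketch below uses SHA3-256 as a stand-in (its padding differs from Keccak-256, so these bytes are not real selectors, but the mechanism is the same) and a hypothetical function name, since the post doesn't give the real one:

```python
import hashlib

def fake_selector(signature: str) -> bytes:
    """First 4 bytes of a hash of the canonical signature.
    NOTE: real EVM selectors use Keccak-256, whose padding differs
    from the NIST SHA3-256 used here, so these are illustrative
    bytes only -- the signature -> 4-byte-prefix mechanism holds."""
    return hashlib.sha3_256(signature.encode()).digest()[:4]

# Hypothetical function name; Vyper's String[64] maps to ABI `string`.
v1 = fake_selector("log_attack(string,uint256,uint256)")
v2 = fake_selector("log_attack(address,uint256,uint256)")
# Changing only the first parameter type yields a completely different
# selector, so calls encoded against the old ABI find no matching
# function and are silently rejected.
```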

The active v2 contract is fully verifiable here:

👉 https://testnet.arcscan.app/address/0x17430A67e11535466cC5f17e736D5e4643B86ba1

That's real onchain proof. Not screenshots.

The ecosystem runs in a real closed loop:

5 autonomous agents with real Circle Developer-Controlled Wallets — PayerAgent, AttackerAgent, LearnerAgent, GrayZoneAgent, MonitorAgent. They pay ArcWarden in real USDC. ArcWarden receives, evaluates, pays Claude for ambiguous cases, logs decisions on Arc. 389 onchain transactions confirmed.

The economic loop:

ArcWarden security cost: $0.001/decision

Traditional SIEM: $0.30+ per transaction

Savings: 99.7% — only viable because of Arc's near-zero fees (~$0.000003 per tx)

ArcWarden is itself an economic agent. It earns revenue, pays its own expenses, manages its own P&L, and autonomously switches operating modes (NORMAL → DEGRADED → EMERGENCY) based on its treasury balance — zero human intervention.
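The arithmetic behind the 99.7% figure, plus a toy version of the treasury-driven mode switch (the trigger balances are assumed; the post names only the modes):

```python
def savings_pct(our_cost: float, siem_cost: float) -> float:
    """Relative saving of a $0.001 decision vs a $0.30+ SIEM check."""
    return (1 - our_cost / siem_cost) * 100

def operating_mode(treasury_usdc: float) -> str:
    """Treasury-driven mode switch. Threshold balances are assumed."""
    if treasury_usdc >= 10.0:
        return "NORMAL"
    if treasury_usdc >= 1.0:
        return "DEGRADED"   # e.g. skip paid Claude escalations
    return "EMERGENCY"      # block-by-default, stop all spending
```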

Bonded Oracle model

ArcWarden operates with a Guaranty Fund — it deposits USDC as collateral to prove solvency before accepting clients. This bridges the gap between anonymous agents and accountable security providers. The fund is managed via the smart contract and verifiable by anyone on ArcScan.
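A minimal sketch of the solvency check such a Guaranty Fund implies. The 1.5x over-collateralization ratio is assumed, since the post says the collateral proves solvency but gives no figure:

```python
def can_accept_client(collateral_usdc: float,
                      current_exposure_usdc: float,
                      new_client_exposure_usdc: float,
                      min_ratio: float = 1.5) -> bool:
    """Accept a new client only while the Guaranty Fund stays
    over-collateralized relative to total client exposure."""
    total = current_exposure_usdc + new_client_exposure_usdc
    return collateral_usdc >= min_ratio * total
```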

The honest part

The demo video was too technical. Reviewers didn't understand what they were looking at and scored 1/5 across the board. The code is solid, the presentation wasn't. Lesson learned the hard way.

Tech stack

Python / FastAPI · asyncio · web3.py · Vyper 0.4.3 · Circle DCW ×6 · x402 protocol · Next.js · SQLite · numpy · Claude API (optional escalation)

Links

🔗 GitHub: https://github.com/ibonon/Arcwarden

⛓️ Smart contract (v2 active): https://testnet.arcscan.app/address/0x17430A67e11535466cC5f17e736D5e4643B86ba1

Live demo on X: https://x.com/i/status/2047584585643425915

🏆 lablab.ai submission: https://lablab.ai/ai-hackathons/nano-payments-arc/omni/arcwarden-autonomous-security-oracle

Feedback welcome — especially on the Risk Engine architecture and the Oracle economic model.

Solo build · Ouagadougou, Burkina Faso · 5 days

u/Any_Good_2682 — 18 days ago