FURO · v0.7 · BUILD 2026.04
STRUCTURED MEMORY
Structured memory for agents

Exact answers. Not relevant-ish matches.

FURO turns the things agents read, hear, and write into typed records your team — and your models — can query by field, not by keyword.

Coming soon · See how it works
02 mechanism · how input becomes memory

Raw conversation, typed on arrival.

Every call, email, doc, or tool output lands in FURO and crystallizes into a lattice of typed records — companies, people, deals, parts, orders. No prompt tricks. No per-app glue code.

01 · crystallize

Input arrives as a cloud. It leaves as structure.

Transcripts, tickets, call notes — whatever comes in is parsed into entities, relations, and fields the moment it lands. No re-ingestion, no nightly jobs.

  • entities / record · 12.4 avg
  • fields / entity · 8.7 avg
  • ingest latency (p50) · 240ms
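A minimal sketch of what a crystallized record could look like in code. The `Entity` class, its field names, and the toy `crystallize` function are illustrative assumptions, not FURO's actual API; a real system would run extraction models where the stub returns fixed records.

```python
from dataclasses import dataclass, field

# Hypothetical shapes for a crystallized record. Names are illustrative,
# not FURO's API.

@dataclass
class Entity:
    entity_type: str                 # e.g. "company", "person", "deal"
    fields: dict[str, object]        # typed field values extracted on ingest
    relations: list[tuple[str, str]] = field(default_factory=list)  # (link_type, entity_id)

def crystallize(raw: str) -> list[Entity]:
    """Toy stand-in: a real system would run extraction here."""
    # A transcript mentioning "Acme" and a $40k deal might yield:
    return [
        Entity("company", {"name": "Acme"}),
        Entity("deal", {"amount_usd": 40_000, "stage": "proposal"},
               relations=[("prospect", "company:acme")]),
    ]

records = crystallize("Call with Acme: moving to proposal at $40k.")
print(len(records), records[1].fields["stage"])  # 2 proposal
```

The point of the shape: every value lands in a named, typed field the moment it arrives, so later queries address fields instead of re-reading prose.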
02 · canonicalize

"14.2 meq/100g" and "14.2 milliequivalents" are the same value.

Units normalize. Dates snap to ISO. Entities resolve against your canonical lists. By the time your agent queries, there's one right answer for the same fact.

  • unit taxonomy · 312 canonical · 2,520 with prefixes
  • entity resolver · rules + LLM (no embedding noise)
  • disambiguation acc. · 100% on 77 eval cases
03 · shared node

One entity. Every memory it appears in.

The same customer, part, or record links across CRM notes, ERP orders, field-service jobs, marketing campaigns. Query the node — get everywhere it's been.

  • link types · 14 builtin
  • cross-source graph · bidirectional
  • dedup strategy · alias-based · canonical record carries every reference
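A sketch of the shared-node idea under stated assumptions: the `EntityGraph` class and its methods are illustrative, not FURO's API. One canonical node accumulates references from every source, and alias-based dedup routes every spelling to the same node.

```python
from collections import defaultdict

# Illustrative shared-node graph: one canonical entity, many source refs.
class EntityGraph:
    def __init__(self):
        self.aliases: dict[str, str] = {}                       # alias -> canonical id
        self.refs: dict[str, list[tuple[str, str]]] = defaultdict(list)

    def register(self, canonical_id: str, *aliases: str) -> None:
        for a in (canonical_id, *aliases):
            self.aliases[a.lower()] = canonical_id

    def link(self, name: str, source: str, record_id: str) -> None:
        node = self.aliases[name.lower()]                       # alias-based dedup
        self.refs[node].append((source, record_id))

    def everywhere(self, name: str) -> list[tuple[str, str]]:
        """Query the node: get every memory it appears in."""
        return self.refs[self.aliases[name.lower()]]

g = EntityGraph()
g.register("company:acme", "Acme", "Acme Corp.")
g.link("Acme", "crm", "note-118")
g.link("Acme Corp.", "erp", "order-5521")
print(g.everywhere("acme"))  # [('crm', 'note-118'), ('erp', 'order-5521')]
```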
03 differentiator · typed recall

Vector search returns relevant-ish chunks.
Typed recall returns the answer.

Below, the same question runs two ways. Vector search surfaces keyword-adjacent noise from unrelated records. FURO dereferences the typed field on the right record — every time, with the right scope.
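The contrast can be made concrete with a toy example; both sides are stand-ins for illustration, with made-up records and scores, not real retrieval code. Vector search ranks text chunks by similarity and hopes the answer is inside one; typed recall dereferences a field on one record.

```python
# Vector-style recall: rank chunks by a similarity score (scores invented).
chunks = [
    ("Acme renewal pricing discussed at length...", 0.81),  # keyword-adjacent
    ("Acme close date moved to 2026-05-14", 0.78),          # the actual answer
    ("Beta Corp close date 2026-02-01", 0.74),              # wrong record entirely
]
top_k = [text for text, _ in sorted(chunks, key=lambda c: -c[1])[:2]]
# The caller still has to read top_k and hope the answer survived chunking.

# Typed recall: dereference the field on the right record, in the right scope.
records = {("deal", "acme-renewal"): {"close_date": "2026-05-14"}}
answer = records[("deal", "acme-renewal")]["close_date"]
print(answer)  # 2026-05-14
```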

04 recall · how you ask

Ask a question, not a search.

Typed memory is aggregatable. Ask for rollups, filters, pivots, counts — the same way you'd ask a person who knows your business cold. Get a number, not a list of links.

05 · aggregate

523 deals, one question.

Vector search caps out at "top-k chunks." Structured memory rolls, pivots, and compares across thousands of records — with the same interface a person uses.

  • pivot depth · unlimited
  • window functions · supported
  • response time (p95) · < 900ms
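A rollup sketch over typed deal records; the field names and the three sample records are illustrative. The same pattern scales to 523 records because the aggregation runs over fields, not over top-k chunks.

```python
from collections import defaultdict

# Three illustrative deal records standing in for a workspace of 523.
deals = [
    {"stage": "proposal",    "amount_usd": 40_000},
    {"stage": "proposal",    "amount_usd": 25_000},
    {"stage": "negotiation", "amount_usd": 90_000},
]

# "Pipeline by stage" — a rollup that top-k chunk retrieval cannot express.
pipeline = defaultdict(int)
for d in deals:
    pipeline[d["stage"]] += d["amount_usd"]

print(dict(pipeline))  # {'proposal': 65000, 'negotiation': 90000}
```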
06 · askable

Ask once. Get the number.

Rolling soil pH? ARR booked this week? Open blockers by stage? FURO returns a value — typed, cited, sourced — not a list of loose links for you to sift.

  • answer types · summary + 7 chart/table types
  • provenance · per-field
  • format guarantee · schema-checked
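A sketch of the answer shape these bullets imply: a typed value plus per-field provenance, checked against a schema before it is returned. The dictionary keys and record ids are assumptions for illustration, not FURO's response format.

```python
# Hypothetical answer payload: a value, not a list of links.
answer = {
    "question": "ARR booked this week?",
    "value": 130_000,
    "unit": "usd",
    "provenance": [  # per-field citations back to source records
        {"field": "amount_usd", "record": "deal:acme-renewal", "source": "call-2026-04-02"},
        {"field": "amount_usd", "record": "deal:beta-expand",  "source": "email-2026-04-03"},
    ],
}

# A schema check before returning (the "format guarantee" above):
assert isinstance(answer["value"], int)
assert all("source" in p for p in answer["provenance"])
print(answer["value"])  # 130000
```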
The substrate doesn't care whether the record is a sales deal, a soil sample, or a repair ticket.
The shape does.
— FURO design principle
05 benchmark · LongMemEval-S

Not a pitch. A benchmark.

LongMemEval-S is the standard benchmark for long-term conversational memory: 500 questions spanning single-session recall, cross-session aggregation, temporal reasoning, preference retrieval, and knowledge updates. We run ingestion, retrieval, and answer generation end to end, graded by an LLM judge using the benchmark's official prompts.

01 · QA accuracy
92.8%
End-to-end question answering across all six question categories, LLM-judged with the benchmark's official prompts. Above published SOTA (Supermemory at 81.6%).
+11.2 vs SOTA
02 · Retrieval R@5
99.1%
The answer-bearing session appears in our top-5 results on 466 of 470 cases. Ahead of MemPalace's published 96.6% raw and 98.4% tuned hybrid.
+0.7 vs best
03 · Preference retrieval
100%
"Recommend a hotel for Miami based on what you know about me" — the category competitors all bottom out on. We lead by 30+ points.
+30 vs SOTA
System                  Overall  SSU   SSA   SSP   KU    TR    MS    R@5
Full context (gpt-4o)   60.2     81.4  94.6  20.0  78.2  45.1  44.3  —
Zep                     71.2     92.9  80.4  56.7  83.3  62.4  57.9  —
Supermemory             81.6     97.1  96.4  70.0  88.5  76.7  71.4  —
MemPalace (raw)         —        —     —     —     —     —     —     96.6
MemPalace (hybrid v4)   —        —     —     —     —     —     —     98.4
FURO                    92.8     96.9  100   100   90.0  90.0  90.0  99.1

SSU — single-session user · SSA — single-session assistant · SSP — single-session preference · KU — knowledge update · TR — temporal reasoning · MS — multi-session
Dataset: longmemeval_s.json · QA model: Claude Sonnet 4.6 · judge: gpt-4o with the benchmark's official prompts (matches Supermemory's grading setup) · retrieval: fused + LLM rerank · embeddings: Qwen3-Embedding-8B with 4B fallback. Overall is the per-category weighted average over the 470 non-abstention cases.
06 your business · shaped

Your workspace, as structured data.

Point FURO at your domain and it shapes itself around your vocabulary. The sales team gets deals, attendees, competitors. Agronomy gets blocks, samples, treatments. Same substrate, different schema.

07 · example · sales

A deal card, written by the agent.

A 32-minute call becomes a typed deal card — prospect, stage, close date, owner, blockers, competitors, attendees — all addressable, all queryable, all pivotable.

  • fields extracted · deal: 18 · call: 42
  • entity links · company · person · competitor
  • confidence per field · cited
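What such a deal card might look like as data; every field name, id, and confidence value below is invented for illustration, not FURO's schema. The shape is the point: each field is individually addressable, queryable, and cited.

```python
# Illustrative typed deal card extracted from a 32-minute call.
deal_card = {
    "type": "deal",
    "prospect": "company:acme",
    "stage": "proposal",
    "close_date": "2026-05-14",
    "owner": "person:j.rivera",
    "blockers": ["security review"],
    "competitors": ["company:rivaltech"],
    "attendees": ["person:j.rivera", "person:acme.cto"],
    "confidence": {"close_date": 0.92, "stage": 0.98},  # per-field, cited
    "source": "call:2026-04-02-acme",
}

# Queries hit fields, not prose:
print(deal_card["close_date"])  # 2026-05-14
```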
08 · workspaces

Same substrate. Every business shape.

We don't ship industry SKUs. We ship a memory system. Your schema — tables, fields, relations — is how FURO knows what your business looks like.

  • schema definition · UI · inferred
  • field types · 19 builtin + custom entity types
  • multi-workspace · shared entities, scoped views
07 shipping path · four steps

From signup to addressable memory, in an afternoon.

No infra to stand up, no vector store to babysit. Connect your sources, describe your shape, and your agents have structured memory the next time they run.

STEP 01

Describe your shape

Define the entities and fields your business runs on. YAML, UI, or inferred from samples.

5 min
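A hypothetical sketch of what a YAML schema definition for the sales example might look like; the syntax, keys, and type names are illustrative assumptions, not FURO's actual schema format.

```yaml
# Illustrative schema sketch, not FURO's real format.
entities:
  deal:
    fields:
      prospect:   {type: entity, target: company}
      stage:      {type: enum, values: [lead, proposal, negotiation, closed]}
      close_date: {type: date}
      amount_usd: {type: money, currency: USD}
    relations:
      attendees:  {target: person, cardinality: many}
```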
STEP 02

Connect your sources

Calls, emails, docs, tool outputs, Slack threads. Ingest flows into the shape.

one-click
STEP 03

Point agents at it

A typed memory tool your agent calls by field. No RAG rewrites, no chunking strategy to tune.

3 lines
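A runnable sketch of the "point agents at it" step. There is no public FURO client, so the `Memory` class below is a stub standing in for one; its constructor and `query()` signature are assumptions for illustration, and only the last two lines represent the integration itself.

```python
# Stub standing in for a hypothetical FURO client so the example runs.
class Memory:
    def __init__(self, workspace: str):
        self.workspace = workspace
        self._records = {("deal", "acme-renewal"): {"prospect": "Acme",
                                                    "close_date": "2026-05-14"}}

    def query(self, entity: str, field: str, where: dict) -> str:
        """Dereference one typed field on the matching record."""
        for (etype, _), rec in self._records.items():
            if etype == entity and all(rec.get(k) == v for k, v in where.items()):
                return rec[field]
        raise KeyError("no matching record")

# The integration itself stays small: connect, then query by field.
memory = Memory(workspace="sales")
answer = memory.query(entity="deal", field="close_date", where={"prospect": "Acme"})
print(answer)  # 2026-05-14
```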
STEP 04

Ask, pivot, roll up

People and agents query the same substrate. Answers cite their source — per-field.

live