File Display

Origin

What started as a Greco side project has become a multi-tenant platform that ships enterprise software at scale.

Get Started

Get started with File Display in seconds. Sign up with your email, and you're automatically connected with your team. Your email domain determines your company, so everyone at your organization sees the same data.

1. Sign Up with Email: Work email, no credit card required.

2. Auto-Assigned to Company: Your email domain becomes your Client ID.

3. Collaborate with Team: Same domain = same workspace, automatically.

Email Domain = Client ID

  • john@acme.com & sarah@acme.com share one workspace
  • No invites needed — sign up and collaborate
  • Data isolation is automatic per company
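
The domain-matching rule can be sketched in a few lines. This is an illustration only; the function name and the exact normalization (lowercasing, trimming) are assumptions, not the platform's actual implementation.

```python
def client_id_from_email(email: str) -> str:
    # Hypothetical helper: the docs state that the email domain determines
    # the company workspace, so two addresses with the same domain map to
    # the same Client ID.
    return email.strip().lower().split("@")[1]

# john@acme.com and sarah@acme.com land in the same workspace:
same_team = client_id_from_email("john@acme.com") == client_id_from_email("sarah@acme.com")
```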

Roles

Founder · Admin · Editor · Viewer

First signup = Founder. Everyone else starts as Editor. Founders/Admins can adjust.

Download File Display Desktop App

Run File Display as a native desktop application on macOS or Windows. The app bundles the full Next.js server, all pages, and every API route into a single installer. On launch, it starts a local server and opens the app — same experience as the web version, running entirely on your machine. Auto-updates keep you on the latest version.

macOS

Intel & Apple Silicon

Apple Silicon .dmg · Intel .dmg
VS Code Fork · macOS 10.15+ · Auto-updates

Windows

64-bit (x86_64)

Setup .exe · Installer .msi
Signed · Windows 10+ · Auto-updates

Greco & Sons — Tool Documentation

The following tools are built for Greco & Sons warehouse operations.

DockVoo

/tools/greco — Dock Operations Workspace

DockVoo is the central daily workspace for the receiving office. It manages the entire lifecycle of inbound loads from schedule upload through check-in, dock assignment, capstone breakdown, receiving, and release. The landing page presents a date selector and six cards — one for each sub-tool and a settings page. All state is persisted per-day to Supabase with localStorage fallback. Writes are authenticated through a server-side API route; reads are available to anyone via a public live view at /live/greco.

Start (Schedule Upload)

Drag-and-drop or browse for a .xlsx receiving schedule. Expected columns: Date, Time, Dock, Door, Load Number, PO Number, Carrier, Vendor, Cases, Cubes, Weight, Vendor Pallets, Warehouse Pallets, Duration (Mins), and Lumper Required.

  • Parses the first worksheet and maps headers by keyword (not fixed position).
  • If multiple dates are present, targets the second chronological date (tomorrow's schedule); otherwise uses the only date.
  • Merges rows sharing the same Load Number — sums Cases, Cubes, Weight, and Pallets; concatenates PO numbers.
  • Routes loads to Main, Rana, or Brewster by Dock text; assigns temperature zones (Dry, Cooler, Freezer) from dock prefixes.
  • Aggregates totals by location and zone; buckets loads into hourly time slots.
  • Strips numeric dock prefixes (e.g. "05 - Brewster Dry" becomes "Brewster Dry").
  • Reads Duration (Mins) and Lumper Required per load for use in the Timer and live tracker.
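
Two of the steps above, keyword-based header mapping and merging rows that share a Load Number, can be sketched as follows. This is a simplified illustration: the field names and merge rules mirror the description, but the real parser handles more columns and edge cases.

```python
# Keyword -> field mapping; headers are matched by substring, not position.
HEADER_KEYWORDS = {"load": "load_number", "cases": "cases", "weight": "weight", "po": "po"}

def map_headers(header_row):
    mapping = {}
    for idx, cell in enumerate(header_row):
        for keyword, field in HEADER_KEYWORDS.items():
            if keyword in str(cell).lower() and field not in mapping:
                mapping[field] = idx
                break
    return mapping

def merge_loads(rows):
    # Rows sharing a Load Number collapse into one: numeric fields are
    # summed, PO numbers are concatenated.
    merged = {}
    for row in rows:
        existing = merged.get(row["load_number"])
        if existing is None:
            merged[row["load_number"]] = dict(row)
        else:
            existing["cases"] += row["cases"]
            existing["weight"] += row["weight"]
            existing["po"] += ", " + row["po"]
    return list(merged.values())
```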

Schedule Workbook Output

On generate, a formatted .xlsx workbook is downloaded with these sheets:

  • Summary — every load for the target date with all parsed columns.
  • Main, Rana, Brewster — the same data split by warehouse location.
  • Daily Totals — aggregated Cases, Vendor Pallets, Warehouse Pallets, and Weight broken down by location and temperature zone.
  • Load Types — load counts by location and zone (e.g. "Main Dry: 12 loads").
  • Color Key — legend for the color-coding used throughout the workbook.
  • Case Rank — loads ranked by case count (highest first).
  • Weight Rank — loads ranked by weight (highest first).

Check-in (Live Load Tracker)

A live table showing every appointment for the day. The receiving office records timestamps as trucks arrive, dock, and leave. The tracker also surfaces lateness and fee indicators.

  • Columns: Appt Time, PO#, Dock, Vendor, Carrier, Receiving Status, Time In, Time Docked, Time Out, Total Time, Cancel, Reschedule.
  • One-click buttons to record Time In, Time Docked, and Time Out — each stamps the current time.
  • Once Time In is recorded, Cancel and Reschedule buttons disappear for that row.
  • Receiving status auto-updates from the Receiving grid: "Not started," "In progress," or "Complete."
  • Grid finish times automatically sync back to the tracker's Time Out column.
  • If a truck checks in more than 2 hours late, a red "$300 WORK-IN" badge appears on its row and a warning is logged.
  • Cancelled loads are highlighted black with red text; rescheduled loads are highlighted green with the new date.
  • An Edit button opens a modal for manually adjusting Time In, Time Docked, and Time Out.
  • Full-text search across PO, Vendor, Carrier, and Dock with yellow highlighting on matches.
  • Mobile-responsive: hides non-critical columns on small screens, tighter padding, full-height scrollable table.
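
The work-in fee rule above reduces to a small calculation. A minimal sketch, assuming times are compared as HH:MM on the same day (the real tracker works from full timestamps):

```python
from datetime import datetime

WORK_IN_THRESHOLD_MIN = 120   # the 2-hour lateness threshold described above
WORK_IN_FEE = 300             # flat fee shown on the "$300 WORK-IN" badge

def lateness(appt_time: str, time_in: str):
    """Return (minutes late, work-in fee) for a checked-in truck."""
    fmt = "%H:%M"
    delta = datetime.strptime(time_in, fmt) - datetime.strptime(appt_time, fmt)
    minutes = int(delta.total_seconds() / 60)
    fee = WORK_IN_FEE if minutes > WORK_IN_THRESHOLD_MIN else 0
    return minutes, fee
```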

Master Export Workbook

The "Export Spreadsheet" button downloads a master .xlsx workbook with the following sheets:

  • Summary List — every load with Scheduled Time, Dock, Vendor, Carrier, PO, Receiving status, Time In/Docked/Out, Total Time, Status, and Reschedule Date.
  • Cancelled — only cancelled loads (same columns).
  • Rescheduled — only rescheduled loads with the new date.
  • Detention (2hrs+) — loads where total dock time exceeded 2 hours.
  • Receivers Grid — one row per grid cell with Hour, Dock, PO, Receiver, Start, Finish, Total Time, Helper Receiver, Helper Start, Helper Duration, Solo Duration, and Combined Duration.
  • Check-in Lateness — for each checked-in truck: PO, Vendor, Dock, Appointment Time, Actual Check-in Time, Minutes Late, and Work-In Fee. Rows more than 120 minutes late are highlighted yellow with bold red text and marked "$300 WORK-IN FEE."

Receiving (Grid)

A dock-by-hour grid where receivers are assigned to POs and start/finish times are recorded. This is the core real-time view of who is receiving what, and where.

  • 17 dock columns (DOCK 6, #7, #8, #10–13, #14–15, #16–22, #26–28, RANA #1–5, RANA #8–11) by 12 hourly rows (6 AM – 5 PM).
  • Each cell has: a PO input (with datalist autocomplete), a Receiver dropdown, and Start/Finish buttons.
  • The PO dropdown only lists POs from trucks that are checked in but not yet assigned a receiver.
  • An Edit button on each cell opens a modal to manually set Start/Finish times and assign a Helper Receiver with their start time.
  • Helper receiver badges display inline when assigned (purple "+ [Name] @ [time]").
  • Three layout modes: Grid (default), List (flat table sorted by start time), and By Receiver (grouped cards per receiver).

Receiver Rankings Export

The "Export Rankings" button downloads a separate .xlsx workbook:

  • Rankings — a leaderboard sorted by fastest average time per truck: Rank, Receiver Name, Truck Count, Total Time (min), Average Time (min).
  • One sub-sheet per receiver — their individual load details: PO, Vendor, Dock, Start, Finish, Duration (min).
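
The leaderboard calculation can be sketched like this: accumulate truck count and total minutes per receiver, then sort by fastest average. The dictionary keys are illustrative, not the export's internal names.

```python
def receiver_rankings(loads):
    # loads: list of {"receiver": str, "duration_min": float}
    stats = {}
    for load in loads:
        s = stats.setdefault(load["receiver"], {"trucks": 0, "total": 0.0})
        s["trucks"] += 1
        s["total"] += load["duration_min"]
    board = [
        {"receiver": name, "trucks": s["trucks"], "total": s["total"],
         "average": s["total"] / s["trucks"]}
        for name, s in stats.items()
    ]
    # Fastest average time per truck ranks first.
    return sorted(board, key=lambda r: r["average"])
```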

Capstone (Breakdown Timing)

Tracks the time from when a truck is docked until its capstone (pallet breakdown) is complete. Shows active breakdowns as cards with a running elapsed timer.

  • Automatically lists trucks where Time Docked is set but Capstone Complete is not.
  • Each card shows: elapsed timer, Vendor, PO, Dock, Time Docked, and a Capstone Employee dropdown.
  • Employee presets: Chad, Larry, Benjamin, Johnny, Chris, Hilario, George, Jay, and Other.
  • Clicking "Capstone Complete" stamps the current time and removes the truck from the active list.
  • Two layout modes: Grid (default, card-based) and Timeline (chronological list).
  • Completed breakdowns are listed at the bottom for reference.

Capstone Rankings Export

The "Export Rankings" button downloads a .xlsx workbook:

  • Rankings — leaderboard sorted by fastest average breakdown: Rank, Employee Name, Breakdown Count, Total Time (min), Average Time (min).
  • One sub-sheet per employee — their individual breakdowns: PO, Vendor, Dock, Time Docked, Capstone Complete, Duration (min).

Timer (Detention Countdown)

Displays a live countdown for every truck currently on dock (Time In set, Time Out not set). The countdown defaults to 2 hours or uses the Duration (Mins) value from the uploaded schedule.

  • Cards sorted by least time remaining (most urgent first).
  • Each card shows: countdown timer, progress bar, time allotment, Vendor, PO#, Dock, Appt Time, Time In, Receiver name, and receiving status.
  • Color coding: green (on track), amber (less than 15 min remaining), red with pulsing bar (overtime).
  • Trucks with Lumper Required are flagged with a purple badge.
  • Trucks disappear from this view when Time Out is recorded (via Check-in or the Receiving grid finish time).
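
The color rule above is a simple threshold check on time remaining. A sketch, using the 2-hour default allotment:

```python
def countdown_state(elapsed_min: float, allotted_min: float = 120):
    # green: on track; amber: under 15 minutes left; red: over the allotment.
    remaining = allotted_min - elapsed_min
    if remaining < 0:
        return "red"
    if remaining < 15:
        return "amber"
    return "green"
```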

Settings

The Settings tab provides two configuration areas: date management and email notifications.

  • Date Dropdown — Clear all dates from the homepage selector without deleting saved session data.

Email Notifications (PO Watchlist)

Type any PO number and receive email alerts when check-in actions happen on it. Notifications are delivered instantly via Resend.

  • Toggle notifications on/off with a master switch and set a delivery email.
  • Choose which events trigger notifications: Time In, Docked, Time Out, Cancelled, Rescheduled.
  • Type a PO# and set scope to Just Me (only you are notified) or Whole Team (all Greco members with enabled notifications receive the email).
  • PO numbers can be added before they appear in the system — when the matching PO is eventually uploaded and a check-in action occurs, the notification fires automatically.
  • The Company Notification Log shows all watched POs across all team members, who set each one, and whether the PO has been found in any saved DockVoo session (with date and vendor details if found).
  • Notifications are suppressed in Test Mode.

Public Live View

A read-only page at /live/greco that anyone can access without logging in. It polls the server every 10 seconds and mirrors the Check-in and Receiving views in a non-editable format.

  • Date selector to switch between workspaces.
  • Check-in tab: same table as the authenticated view but with no action buttons — shows timestamps, statuses, work-in badges, and total times.
  • Receiving tab: same Grid / List / By Receiver layouts, showing PO, Receiver, start/finish times, and helper assignments.
  • No Supabase credentials are exposed; data is served through a server-side API route.

Data Persistence & Security

  • Session data (tracker rows + grid state) is stored per-day in the greco_cindyvue_sessions Supabase table.
  • Row Level Security: anonymous users can only read; writes require an authenticated session belonging to the Greco company ID.
  • Writes go through /api/greco/session which validates the JWT and company membership before writing via the service role.
  • localStorage provides an offline fallback and emergency backup if the backend is unreachable.
  • Auto-save triggers 500ms after any tracker or grid change, with a live sync status indicator.

Transfers (Overstock Pro)

/transfers — Overstock Analysis & Report Card

Processes the weekly overstock inventory workbook, rebuilds it with clean value-only data, segments by building, and generates an executive Report Card highlighting actionable overstock positions.

Input

A single .xlsx workbook containing a "Total Overstock" sheet plus building-specific sheets (Hecht, Brewster, Rana).

Core Logic

  • Copies all sheets to a new workbook with formula-neutralized values.
  • Rebuilds Total Overstock by excluding RCV/RTN slots and segmenting rows by slot prefix into Hecht, Brewster, and Rana blocks.
  • Rebuilds the "By building" sheet as a frequency-style pivot grouped by building.
  • Report Card filters: rows where temp zone = D (Dry), total offsite is nonzero, difference is nonzero, and Hecht pallets are between 0 and 6. Sorts by difference descending.
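
The Report Card filter above can be expressed directly as a predicate plus a sort. The row keys here are illustrative names for the columns described:

```python
def report_card_rows(rows):
    # Keep: Dry zone, nonzero total offsite, nonzero difference,
    # Hecht pallets between 0 and 6. Sort by difference descending.
    keep = [
        r for r in rows
        if r["temp_zone"] == "D"
        and r["total_offsite"] != 0
        and r["difference"] != 0
        and 0 <= r["hecht_pallets"] <= 6
    ]
    return sorted(keep, key=lambda r: r["difference"], reverse=True)
```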

Output

  • Master workbook — all rebuilt sheets in a single download.
  • Report Card — standalone file with the filtered/sorted executive view.

Forklifts

/forklift — Shift Productivity Analysis

Compares Day and Night forklift shift performance, ranking each operator by moves per hour and flagging idle time issues.

Input

Two .xlsx files — one per shift. Shift auto-detection uses DFORKL (Day) and LOPS3 (Night) identifiers.

Core Logic

  • Finds the header row dynamically by scanning the first 20 rows for "Employee" and "Direct."
  • Maps columns: Direct Hours, Indirect %, Indirect Hours, Idle, Rcv Put, Rpl, Rpl Tfr, and Tfr.
  • Total Pallets = Rcv Put + Rpl + Tfr (Rpl Tfr is explicitly excluded).
  • Total Hours = Direct Hours + Indirect Hours; Moves Per Hour = Total Pallets / Total Hours.
  • Operators are sorted by Moves Per Hour descending; conditional formatting highlights high idle and low productivity.
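
The core metric reduces to two sums and a division, with Rpl Tfr deliberately left out:

```python
def moves_per_hour(rcv_put, rpl, tfr, direct_hours, indirect_hours):
    # Total Pallets = Rcv Put + Rpl + Tfr (Rpl Tfr is excluded by design).
    total_pallets = rcv_put + rpl + tfr
    # Total Hours = Direct Hours + Indirect Hours.
    total_hours = direct_hours + indirect_hours
    return total_pallets / total_hours
```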

Output

Forklift Productivity {date}.xlsx — dated title row, Night Shift section above Day Shift, with per-operator metrics and section totals.

Pickers

/picker — Pick Performance & Short Runner Breakdown

Splits warehouse personnel into Pickers and Short Runners based on crew identifiers, calculates scan accuracy and goal attainment, and applies conditional formatting to highlight underperformers.

Input

A single .xlsx WMS export with employee rows. Blacklisted system columns are auto-removed.

Core Logic

  • Strips blacklisted WMS columns and normalizes indirect/idle fields.
  • Detects the Crew column and classifies rows as Pickers (PICKIN, PickN, NSELCT) or Short Runners (everything else).
  • Computes group-level Scan % averages for each population.
  • Applies conditional formatting to Indirect %, % to Goal, Scan %, and Idle % columns.

Output

Picker Info {date}.xlsx — Summary sheet with aggregates, a Pickers sheet, and a Short Runners sheet with dynamic columns.

Shorts

/shorts — Short Shipment Analysis

Analyzes short-shipped items across the warehouse, breaking them down by hour, identifying duplicate items, and tracking how many shorts were recovered versus written off as markouts.

Input

A single .xlsx file with columns: Dept, Item, Description, Slot, Route, Original Qty, Original Short, Qty Found, and Shorted timestamp.

Core Logic

  • Drops rows with blacklisted slot prefixes.
  • Separates markout-excluded rows (those with a Qty Found value) into their own summary.
  • Builds a duplicate pivot counting how many times each item appears.
  • Buckets remaining rows by the hour extracted from the Shorted timestamp.
  • Per-hour sheets include slot subtotals as a mini pivot (columns K–M).
  • Compares total Shorted vs Found across all hours.
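
The hour-bucketing step can be sketched as below. The timestamp format string is an assumption; the real tool parses whatever the WMS export emits.

```python
from collections import defaultdict
from datetime import datetime

def bucket_by_hour(rows):
    # Groups short rows by the hour extracted from the Shorted timestamp.
    buckets = defaultdict(list)
    for row in rows:
        hour = datetime.strptime(row["shorted"], "%Y-%m-%d %H:%M").hour
        buckets[hour].append(row)
    return dict(buckets)
```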

Output

Shorts_{date}.xlsx — Summary, Summary Markouts Excluded, Pivot Table Duplicates, one sheet per hour bucket, and a Shorted vs Found totals sheet.

Returns

/truck-errors — Return Reason Classification

Classifies truck returns by reason code into four categories, then generates separate workbooks for driver-attributed issues versus selector-attributed issues, each with summary pivots and item-level detail.

Input

A single .xlsx file with columns: Item, Dept, Item Desc, Rtn Cd, Normal Qty, Driver, Selector, Cust No, Cust Name, Cust Order, Route, and Stop.

Core Logic

  • Parses the Return Code (Rtn Cd) by extracting text after the dash separator.
  • Maps each reason into one of four groups: Overlooked, Damage on Truck, Short on Truck, or Mispick/Refused.
  • Overlooked and Damage workbooks pivot by Driver with item-level detail; Short and Mispick workbooks pivot by Selector.
  • Summary sheets aggregate Normal Qty by the responsible person; detail sheets show every line item.
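
The classification step can be sketched as a dash-split followed by a keyword lookup. The keyword strings and the "Other" fallback are illustrative; the production mapping covers the actual reason codes in the export.

```python
REASON_GROUPS = {
    "overlooked": "Overlooked",
    "damage": "Damage on Truck",
    "short": "Short on Truck",
    "mispick": "Mispick/Refused",
    "refused": "Mispick/Refused",
}

def classify(rtn_cd: str) -> str:
    # Rtn Cd looks like "07 - Damaged on truck"; the reason is the text
    # after the dash separator.
    reason = rtn_cd.split("-", 1)[-1].strip().lower()
    for keyword, group in REASON_GROUPS.items():
        if keyword in reason:
            return group
    return "Other"  # fallback for unrecognized codes (assumption)
```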

Output

  • Overlooked and Damage on Truck workbook — driver-based pivots with summary and item detail.
  • Short on Truck and Mispick-Refused workbook — selector-based pivots with summary and item detail.

Skips

/skips — Slot & UPC Skip Analysis

Separates warehouse skip events into Slot Skips (SKPSLT) and UPC Skips (SKPLUPC), ranks employees by skip count, and surfaces the top and bottom performers.

Input

A single .xlsx WMS export with employee and error type columns. Blacklisted columns are auto-removed.

Core Logic

  • Identifies the employee column (matching "generated by emp") and the error type column.
  • Filters rows into SKPSLT (Slot Skips) and SKPLUPC (UPC Skips) populations.
  • Sorts each population by employee name and counts occurrences.
  • Ranks the top 10 and bottom 10 employees by skip count for each type.

Output

Skip_Report_{date}.xlsx — Summary (counts + top/bottom leaderboards), All Data, Slot Skips, and UPC Skips sheets.

Returns+

/returns — Extended Return Categorization

A deeper return analysis that categorizes every line item into Operations Faults, Customer Issues, Spoilage, or Other, then builds per-category breakdown sheets with route/customer/item detail.

Input

A .csv or .xlsx file with columns: Item, Item Desc, Rtn Cd, Normal Quantity, Customer Name, Route, Stop, Orig. Inv#, and Date.

Core Logic

  • Extracts the reason text after the dash in Rtn Cd.
  • Matches each reason against regex patterns defined in the category config to bucket into Operations Faults, Customer Issues, Spoilage, or Other.
  • Rolls up quantity by reason, customer, route, item, and date range.
  • Each category sheet includes Route, Stop, Customer, Item, Description, Qty, Reason, and Invoice columns with a quantity total row.
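
The regex-driven category config can be sketched like this. The patterns shown are illustrative placeholders, not the production list; first matching category wins.

```python
import re

CATEGORY_PATTERNS = {
    "Operations Faults": [r"mispick", r"short", r"overlooked"],
    "Customer Issues": [r"refused", r"order(ed)? in error"],
    "Spoilage": [r"spoil", r"out of date"],
}

def categorize(reason: str) -> str:
    text = reason.lower()
    for category, patterns in CATEGORY_PATTERNS.items():
        if any(re.search(p, text) for p in patterns):
            return category
    return "Other"
```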

Output

Returns_Summary_{date}.xlsx — Summary dashboard plus per-category sheets (Operations Faults, Customer Issues, Spoilage, Other).

Cycle Count

/cyclecount — Inventory Adjustment Analyzer

Aggregates cycle count adjustment data across multiple files, computing net quantity variances per item and breaking down adjustments by user, department, and reason code.

Input

One or more .xlsx or .csv files with columns: Item, Description, Adjustment Date, Slot# Display, Adjusted By, Quantity 1, Warehouse Dept, Adjustment Code, and Memo.

Core Logic

  • Merges rows across all uploaded files into a unified dataset.
  • Computes net quantity per item (positive = gain, negative = loss).
  • Aggregates stats by department, user, and adjustment reason code.
  • Identifies the top 20 items by absolute adjustment magnitude.
  • Groups raw rows by user for per-person detail sections.
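
The net-per-item calculation is a signed accumulation across all merged rows. A sketch with illustrative key names:

```python
from collections import defaultdict

def net_by_item(rows):
    # Positive net = inventory gain, negative net = loss, as on the
    # Item Summary sheet.
    totals = defaultdict(float)
    for row in rows:
        totals[row["item"]] += row["qty"]
    return dict(totals)
```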

Output

Cycle_Count_Analysis_{date}.xlsx — Item Summary (net totals), Analytics Dashboard (by dept/user/code + top 20), Master Data (all rows), and User Details (per-person sections).

Driver Analytics

/drivers — Fleet Performance Dashboard

Ranks delivery drivers by efficiency, scan accuracy, and volume, producing a dashboard with fleet-level aggregates and top-10 leaderboards.

Input

A .csv or .xlsx file with flexible headers — the engine auto-detects columns for Driver Name, Efficiency, Scan %, Volume/Pieces, Hours, and Stops.

Core Logic

  • Dynamically matches column headers using fuzzy keyword search (e.g., "efficiency," "scan," "volume").
  • Computes fleet-level totals and averages across all drivers.
  • Builds top-10 leaderboards for Efficiency, Scan %, and Volume.
  • Full roster is sorted by efficiency descending with conditional scan % coloring.
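
The fuzzy header matching can be sketched as a substring search over lowercased headers; each target field resolves to the first header containing its keyword. The keyword map below is an assumption for illustration.

```python
def match_columns(headers, keyword_map):
    # keyword_map: {field_name: keyword}; returns {field_name: column_index}.
    lowered = [str(h).lower() for h in headers]
    resolved = {}
    for field, keyword in keyword_map.items():
        for idx, header in enumerate(lowered):
            if keyword in header:
                resolved[field] = idx
                break
    return resolved
```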

Output

Driver_Performance_{date}.xlsx — Dashboard (fleet cards + three leaderboards) and Full Roster sheet.

Replen

/replen — Replenishment Goal Tracking

Joins split-percentage data with location-level replenishment detail to rank each location against a 75% day-shift goal, with color-coded performance tiers.

Input

Two .xlsx files: a Splits file (shift + customer/location + % total) and a Data file (location, crew, shift, pieces, avg day, % total).

Core Logic

  • Builds per-location day/night split percentages from the Splits file.
  • Joins split data to the detail rows from the Data file.
  • Ranking Summary sorts locations by day % descending with green/yellow/red tiers against the 75% goal.
  • Creates one detail sheet per location (sanitized name) sorted by % total.

Output

Replen_Report_{date}.xlsx — Ranking Summary sheet plus per-location detail sheets.

Reconciliation (Diffs)

/reconciliation — Credit vs Returns Matching

Cross-references a Credit file against a Returns file, matching by customer and item, to surface quantity discrepancies between the two systems.

Input

Two .xlsx files: a Credit workbook and a Returns workbook. Sheets whose name contains "summary" are skipped.

Core Logic

  • Credit file: extracts Customer (col 4), Item (col 6), Description (col 7), Qty (col 9), and PO from a "po number" header.
  • Returns file: extracts Customer (col 8), Item (col 7), Qty (col 4), Description (col 6), and PO from a "cust order" header.
  • PO normalization handles 7-digit codes with SS/RMR/FPD prefix rules.
  • Keys on customer|item; accumulates Credit (entree) and Returns (BFC) quantities; difference = entree + BFC (signed).
  • Results sorted by difference to surface the largest discrepancies first.
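
The accumulation and signed difference can be sketched as follows. Sorting by absolute difference (largest discrepancy first) is my reading of "largest discrepancies first"; the tool may order differently.

```python
from collections import defaultdict

def reconcile(credit_rows, returns_rows):
    # Keyed on customer|item; entree = Credit qty, bfc = Returns qty,
    # difference = entree + bfc (signed).
    acc = defaultdict(lambda: {"entree": 0.0, "bfc": 0.0})
    for r in credit_rows:
        acc[f'{r["customer"]}|{r["item"]}']["entree"] += r["qty"]
    for r in returns_rows:
        acc[f'{r["customer"]}|{r["item"]}']["bfc"] += r["qty"]
    rows = [{"key": k, **v, "difference": v["entree"] + v["bfc"]}
            for k, v in acc.items()]
    return sorted(rows, key=lambda r: abs(r["difference"]), reverse=True)
```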

Output

Reconciliation_{date}.xlsx — single "Overall Summary" sheet with Customer, Item, PO #, Description, Entree Qty, BFC Qty, and Difference (conditionally colored).

Loader Analytics

/loader — Loading Dock Performance

Ranks loading dock personnel by efficiency, volume, utilization, and idle time, with conditional formatting against goal thresholds.

Input

A single .xlsx WMS export with fixed column positions for Name, Crew, TPH, Goal, Direct Hours, Utilization, Idle, Idle %, and Pieces.

Core Logic

  • Reads rows starting at row 2 using fixed column indices (name=4, crew=5, TPH=7, goal=8, etc.).
  • Goal and utilization are parsed as percentages.
  • Builds ranked leaderboards for Efficiency (goal %), Volume (pieces), Utilization, and Idle Time.
  • Dashboard applies conditional formatting: green for at/above goal, red for significantly below; similar thresholds for idle.

Output

Loader_Analytics_{date}.xlsx — Dashboard, Efficiency, Volume, Utilization, and Idle Time sheets.

Weekly Returns

/returns-intel — Week-over-Week Return Trends

Tracks return trends week over week by location, identifying the biggest movers (locations with the largest increases or decreases in returns) and surfacing top offenders by return reason.

Input

A single .xlsx file with a header row containing "BFC Client ID." Expected columns: DOW (day of week), BFC Client ID (location), Return Description (reason), CW (current week qty), LW (last week qty), and Diff.

Core Logic

  • Detects the header row by scanning the first 5 rows for "BFC Client ID."
  • Aggregates by location, day, and reason with current-week/last-week/diff metrics.
  • Computes week-over-week percentage changes per location.
  • Ranks locations by largest diff increase (biggest movers) and decrease (biggest improvers).
  • Identifies top offenders: for each return reason, which location has the worst numbers.
  • Persists rich aggregate data to the database for historical trend tracking.

Output

Returns_Intel_{date}.xlsx — Executive Summary with a "Biggest Movers (Week over Week)" table showing Status, Location, CW, LW, Diff, and % Share.

Architecture Notes

All processing runs client-side in the browser using ExcelJS — no data is uploaded to any server.
Tools are gated by GrecoOnlyGuard — only users belonging to the Greco company ID can access them.
Generated reports are stored in Supabase for audit trails, AI analysis, and email notifications.
DockVoo persists workspace state per-day to Supabase (authenticated writes via API route, anon reads via RLS) with localStorage fallback.
Every tool shares a consistent dark glass-panel UI with drag-and-drop upload, real-time console logs, and one-click Excel download.
Column detection is flexible — headers are matched by keyword rather than fixed position (except Loader, which uses WMS-fixed indices).

API & Webhooks

Integrate File Display with your existing WMS, ERP, or other systems using our comprehensive REST API. With dozens of external endpoints across many resource groups, you can submit reports, manage webhooks, run anomaly detection, perform semantic search, control the AI agent, manage custom tools, schedule recurring reports, export data in bulk, and execute batch operations — all programmatically.

Getting Started

1. Generate API Key

Go to the Developer page and create either a Standard API key or an AI Bundle key, depending on the endpoints you plan to call.

2. Configure Webhooks

Set up webhook endpoints to receive notifications when reports complete.

3. Make API Calls

Submit reports, retrieve data, and generate AI analyses programmatically.

Go to Developer Dashboard · OpenAPI Spec (JSON)

Endpoint availability and request/response shapes are maintained in /api/v1/openapi.json. Use it as the source of truth for generated clients.

REST API

Submit reports programmatically from any system that can make HTTP requests.

  • Submit CSV, XLSX, or JSON data
  • Multi-file submissions for complex tools
  • Automatic tool detection
  • Idempotency key support
  • Activity logging & audit trail

Webhooks

Receive real-time notifications when events occur in File Display.

  • Report completed notifications
  • AI analysis included in payloads
  • HMAC-SHA256 signed payloads
  • Automatic retry with backoff
  • Delivery logs & monitoring
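
Webhook payloads are HMAC-SHA256 signed, so a receiver can verify authenticity before processing. A minimal sketch: the signature header name and hex encoding here are assumptions; check the webhook configuration page for the exact scheme.

```python
import hashlib
import hmac

def verify_webhook(payload: bytes, signature_header: str, secret: str) -> bool:
    # Recompute the HMAC-SHA256 of the raw payload with the signing secret
    # and compare in constant time to the value from the signature header.
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```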

Authentication

All API requests require authentication using a Bearer token. Include your API key in the Authorization header.

Authorization: Bearer fd_live_xxxxxxxxxxxxx

API Key Prefixes

  • fd_live_ — Standard and AI Bundle keys
  • Key type is enforced server-side per endpoint.

Key Permissions

  • reports:read — Read reports, analytics, search
  • reports:write — Submit, export, manage tools/agent
  • webhooks:manage — Create & configure webhooks
  • key_type=ai_bundle — Required for /api/v1/ai/chat
  • Use a Standard key for all other /api/v1/* endpoints.

API Endpoints Reference

POST: Write Operations

POST /api/v1/ai/chat

Direct AI chat/completions endpoint for external integrations, backed by the Vercel AI Gateway.

Body: prompt or messages[] · model? · temperature? · max_tokens?
Requires: ai_analysis feature + AI Bundle key type | Tier: Starter+

POST /api/v1/reports

Submit a report for processing. Supports single or multi-file submissions.

Requires: reports:write

POST /api/v1/reports/analyze

Generate AI analysis and optionally email it to recipients.

Requires: reports:write

POST /api/v1/agent

Send a message to the AI agent with tool-calling, memory, and context awareness.

Requires: reports:write

POST /api/v1/anomalies/detect

Run anomaly detection on submitted employee metrics. Returns detected anomalies with severity.

Body: tool · metrics[] · report_date · notify?
Requires: reports:write | Tier: Starter+

POST /api/v1/search

Semantic search across historical reports and AI analyses using natural language.

Body: query · tool_filter? · limit? · similarity_threshold?
Requires: reports:read | Tier: Pro+

POST /api/v1/agent/stream

Streaming AI agent endpoint (NDJSON). Real-time text chunks, tool calls, and memory updates.

Body: message · conversationHistory? · session_id?
Tier: Starter+

POST /api/v1/agent/memory

Add an entry to the AI agent's persistent memory.

Body: category · content · importance?
Requires: reports:write | Tier: Starter+

POST /api/v1/webhooks

Create a webhook endpoint programmatically. Returns the signing secret (shown once).

Body: name · url · events? · description?
Requires: webhooks:manage | Tier: Starter+

POST /api/v1/tools

Create a custom tool with HTML/CSS/JS code. Auto-creates v1 version.

Body: name · code · description? · is_public?
Requires: reports:write | Tier: Starter+

POST /api/v1/tools/generate

AI-generate tool code from a natural language prompt.

Body: prompt · needs_backend?
Requires: reports:write | Tier: Starter+

POST /api/v1/filters

Create or update an AI analysis filter for a specific tool.

Body: tool · filter_prompt
Requires: reports:write | Tier: Starter+

POST /api/v1/schedules

Create a recurring report schedule using cron expressions or intervals.

Body: name · tool · cron · timezone?
Requires: reports:write | Tier: Pro+

POST /api/v1/export

Request a bulk data export (JSON or CSV). Small exports return synchronously; large exports run as background jobs.

Body: tool · format? · days? · include_analyses?
Requires: reports:write | Tier: Starter+

POST /api/v1/batch

Execute multiple API operations in a single request. Operations run in parallel.

Body: operations[]
Max batch: 50 (Pro) / 500 (Enterprise) | Tier: Pro+
GET: Read Operations

GET /api/v1/reports/:id

Get the status and details of a submitted report.

Requires: reports:read

GET /api/v1/reports/:id/download

Download the processed report file.

Requires: reports:read

GET /api/v1/analytics

Retrieve historical analytics data for a tool.

Query: ?tool=forklift · ?days=30 · ?limit=100

GET /api/v1/activity

Get team activity feed including API submissions.

Query: ?days=7 · ?tool=picker · ?limit=50

GET /api/v1/files

List stored input and output files.

Query: ?type=output · ?tool=forklift · ?days=14

GET /api/v1/files/:id

Download a specific file by ID (binary or base64 format).

Query: ?format=binary · ?format=base64

GET /api/v1/anomalies

List detected anomalies with optional aggregate stats.

Query: ?tool=forklift · ?severity=high · ?stats=true
Tier: Starter+

GET /api/v1/search/history

Retrieve past semantic search queries and result counts.

Tier: Pro+

GET /api/v1/agent/memory

List the AI agent's memory entries with category filters and search.

Query: ?category=insight · ?search=revenue
Tier: Starter+

GET /api/v1/agent/settings

Get the AI agent configuration (data sources, roles, memory limits).

Tier: Starter+

GET /api/v1/webhooks

List webhook endpoints with stats (deliveries, success rate, failures).

Requires: webhooks:manage | Tier: Starter+

GET /api/v1/webhooks/:id

Get webhook details with delivery history and success rate metrics.

Requires: webhooks:manage | Tier: Starter+

GET /api/v1/tools

List custom tools. Filter by visibility (private, shared, all).

Query: ?visibility=shared · ?limit=50
Tier: Starter+

GET /api/v1/tools/:id/versions

List all versions of a custom tool with code and default flag.

Tier: Starter+

GET /api/v1/tools/:id/data

Read persistent data entries stored by a custom tool via ToolAPI.

Query: ?key=mydata
Tier: Starter+

GET /api/v1/filters

List active AI analysis filters by tool name.

Query: ?tool=forklift
Tier: Starter+

GET /api/v1/filters/changelog

View AI filter change history — who changed what, when, and the old vs new filter.

Tier: Starter+

GET /api/v1/schedules

List recurring report schedules with run stats.

Query: ?active=true · ?tool=picker
Tier: Pro+

GET /api/v1/export

List previous data exports and their status.

Tier: Starter+

GET /api/v1/export/:id

Check export status or download a completed export result.

Tier: Starter+

GET /api/v1/calendar/events

List calendar events in a date range for your company.

Query: ?from=ISO_DATE · ?to=ISO_DATE · ?visibility=public|company|all · ?event_type=meeting
Auth: Standard API key

GET /api/v1/calendar/available-slots

Return available time slots that do not overlap calendar events.

Query: ?from=ISO_DATE · ?to=ISO_DATE · ?slot_minutes=30 · ?visibility=public|company|all
Auth: Standard API key
PUT / DELETE: Management Operations

PUT /api/v1/webhooks/:id

Update webhook configuration (URL, events, active status).

Requires: webhooks:manage

PUT /api/v1/tools/:id

Update a custom tool. Auto-creates a new version if code changes.

Requires: reports:write

PUT /api/v1/agent/settings

Update agent configuration (data sources, memory retention, custom instructions).

Requires: reports:write

PUT /api/v1/schedules/:id

Update a schedule (cron, active status, payload).

Requires: reports:write

PATCH /api/v1/anomalies

Acknowledge or dismiss a detected anomaly.

Body: anomaly_id · action (acknowledge|dismiss)
Requires: reports:write

DELETE /api/v1/webhooks/:id

Delete a webhook endpoint.

Requires: webhooks:manage

DELETE /api/v1/tools/:id

Delete a custom tool and all its versions.

Requires: reports:write

DELETE /api/v1/agent/memory?id=:id

Remove a specific memory entry from the AI agent.

Requires: reports:write

DELETE /api/v1/schedules/:id

Delete a recurring report schedule.

Requires: reports:write

Endpoints by Group

Group | Endpoints | What It Does
AI Chat | POST /api/v1/ai/chat | Direct chat/completions endpoint via AI Gateway (requires AI Bundle key)
Reports | GET, POST, GET /:id, GET /:id/download, POST /analyze | Submit files, track processing, download results, AI analysis with email
Analytics | GET | Historical report data by tool (up to 365 days)
Webhooks | GET, POST, GET/PUT/DELETE /:id, POST /:id/test, GET /:id/deliveries | Full lifecycle — HMAC-signed, auto-retry, delivery tracking
Anomalies | GET, PATCH, POST /detect | Statistical anomaly detection, severity classification, acknowledge/dismiss
Search | POST, GET /history | Natural language semantic search via embeddings + pgvector
Agent | POST, POST /stream, GET/POST/DELETE /memory, GET/PUT /settings | Streaming AI agent with 11 tools, persistent memory, configurable settings
Tools | GET, POST, GET/PUT/DELETE /:id, versions, data, POST /generate | Custom HTML/JS tools with versioning, cloud data, AI code generation
Filters | GET, POST, DELETE, GET /changelog | AI analysis context filters per tool with change history
Schedules | GET, POST, GET/PUT/DELETE /:id | Cron or interval-based recurring report schedules
Export | GET, POST, GET /:id | Bulk data export (JSON/CSV), sync or async job queue
Calendar | GET /events, GET /available-slots | Calendar event retrieval and open-slot discovery within a date range
Batch | POST | Execute up to 500 API operations in one request (parallel)
Files | GET, GET /:id | Browse and download stored input/output files
Activity | GET | Team activity feed with filtering by tool, action, user, date
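The Batch group above accepts up to 500 operations per request. As a sketch of a client-side guard (the `operations`, `method`, and `path` field names are illustrative assumptions, not taken from the reference above):

```javascript
// Build a batch payload, enforcing the documented 500-operation cap.
// Field names here are assumptions for illustration only.
function buildBatchPayload(operations) {
  if (operations.length > 500) {
    throw new Error('Batch endpoint accepts at most 500 operations');
  }
  return { operations };
}

const payload = buildBatchPayload([
  { method: 'GET', path: '/api/v1/analytics?tool=picker&days=7' },
  { method: 'PATCH', path: '/api/v1/anomalies',
    body: { anomaly_id: 'anom_123', action: 'acknowledge' } },
]);

console.log(JSON.stringify(payload, null, 2));
```

Operations in a batch run in parallel, so avoid ordering assumptions between them.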

50+ Endpoints · 16 Resource Groups · 11 AI Agent Tools · 4 API Tiers

Supported Tool IDs

Use these IDs for the tool parameter when submitting reports, querying analytics, or filtering data via the API.

forklift: Forklift
picker: Picker
returns: Returns
shorts: Shorts
skips: Skips
truck_errors: Truck Err
replen: Replen
reconciliation: Diffs
loader: Loader
returns_intel: RetIntel

Code Examples

Submit a Single-File Report

curl -X POST https://your-domain.com/api/v1/reports \
  -H "Authorization: Bearer fd_live_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "picker",
    "file_name": "daily_report.xlsx",
    "file_type": "xlsx",
    "data": "<base64_encoded_file>",
    "idempotency_key": "report-2024-01-15"
  }'

Submit a Multi-File Report

Tools like Forklift and Replen support multiple input files:

curl -X POST https://your-domain.com/api/v1/reports \
  -H "Authorization: Bearer fd_live_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "forklift",
    "files": [
      {
        "file_name": "day_shift.xlsx",
        "file_type": "xlsx",
        "data": "<base64_encoded_file_1>"
      },
      {
        "file_name": "night_shift.xlsx",
        "file_type": "xlsx",
        "data": "<base64_encoded_file_2>"
      }
    ],
    "idempotency_key": "forklift-2024-01-15"
  }'

Generate AI Analysis & Email

Send report data to generate an AI summary and email it to recipients:

curl -X POST https://your-domain.com/api/v1/reports/analyze \
  -H "Authorization: Bearer fd_live_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "tool": "forklift",
    "data": "Employee,Moves,Hours\nJohn,150,8\nJane,200,8",
    "report_date": "2024-01-15",
    "send_email": true,
    "email_recipients": ["manager@example.com", "team@example.com"]
  }'

Retrieve Analytics Data

curl -X GET "https://your-domain.com/api/v1/analytics?tool=forklift&days=30" \
  -H "Authorization: Bearer fd_live_xxxxx"

Call the AI Bundle Chat Endpoint

Use an AI Bundle key from Developer Settings. Standard keys are rejected on this route.

curl -X POST https://your-domain.com/api/v1/ai/chat \
  -H "Authorization: Bearer fd_live_xxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Summarize this shift report and highlight risks",
    "model": "google/gemini-2.5-flash",
    "temperature": 0.4,
    "max_tokens": 1200
  }'

Get Calendar Availability Slots

Use a Standard key to fetch open slots from your company calendar.

curl -X GET "https://your-domain.com/api/v1/calendar/available-slots?from=2026-02-14T09:00:00Z&to=2026-02-14T18:00:00Z&slot_minutes=30&visibility=company" \
  -H "Authorization: Bearer fd_live_xxxxx"

AI Report Analysis API

The /api/v1/reports/analyze endpoint generates AI-powered summaries and can automatically email them to your configured recipients (both internal team members and external addresses).

Request Fields

  • tool — Required tool ID
  • data — Report data (text or base64)
  • is_base64 — Set true if data is base64
  • context_filter — Custom AI prompt (optional)
  • report_date — Report date (YYYY-MM-DD)
  • send_email — Whether to email recipients (default: true)
  • email_recipients — Override recipient list
  • include_data_preview — Include data sample
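Putting the request fields together in Node, as a sketch (the values are made up), `Buffer` handles the base64 encoding that `is_base64` signals:

```javascript
// Build a /api/v1/reports/analyze request body with base64-encoded data.
const csv = 'Employee,Moves,Hours\nJohn,150,8\nJane,200,8';

const body = {
  tool: 'forklift',
  data: Buffer.from(csv, 'utf8').toString('base64'),
  is_base64: true,                  // tells the API the data field is base64
  report_date: '2024-01-15',
  send_email: false,                // skip email delivery for this run
};

// Round-trip check: decoding recovers the original CSV
console.log(Buffer.from(body.data, 'base64').toString('utf8') === csv); // true
```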

Response Fields

  • analysis — AI-generated summary text
  • analysis_id — Saved analysis record ID
  • custom_filter_used — Filter that was applied
  • email.sent — Number of emails sent
  • email.results — Per-recipient status
  • processing_time_ms — Processing duration

Webhook Events

Configure webhook endpoints to receive real-time notifications. All payloads are signed with HMAC-SHA256 for security verification. Webhook payloads include AI analysis when available.

report.created (API submissions)

Triggered when a new report is submitted via the API.

report.completed (includes AI analysis)

Processing finished successfully. Includes summary stats, output file info, and AI analysis.

report.failed (error details)

Report processing encountered an error. Includes error message and details.

Example Webhook Payload

{
  "event": "report.completed",
  "timestamp": "2024-01-15T10:30:00Z",
  "data": {
    "submission_id": "sub_abc123",
    "tool": "forklift",
    "status": "completed",
    "input_filename": "daily_report.xlsx",
    "output_filename": "forklift_report_2024-01-15.xlsx",
    "records_processed": 150,
    "processing_time_ms": 1250,
    "ai_analysis": "Summary: Top performer John Smith with 200 moves...",
    "date_range": "2024-01-15"
  }
}

Webhook Security

Verify webhook authenticity using the signature in the X-Webhook-Signature header.

// Node.js signature verification
const crypto = require('crypto');

function verifyWebhook(payload, signature, secret) {
  const expected = crypto
    .createHmac('sha256', secret)
    .update(payload)
    .digest('hex');
  // timingSafeEqual throws on buffers of unequal length, so check first
  if (signature.length !== expected.length) return false;
  return crypto.timingSafeEqual(
    Buffer.from(signature),
    Buffer.from(expected)
  );
}

Request Headers

  • X-Webhook-Signature — HMAC-SHA256 signature
  • X-Webhook-Timestamp — Unix timestamp
  • X-Webhook-Event — Event type

Retry Behavior

  • 5 retry attempts with exponential backoff
  • Auto-disabled after consecutive failures
  • Full delivery logs in Developer dashboard

Rate Limits

  • 100 requests per minute per API key
  • 10 MB max file size per request
  • Rate limit headers in response

Headers: X-RateLimit-Limit, X-RateLimit-Remaining, X-RateLimit-Reset
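A client can use those headers to decide how long to back off. This sketch assumes `X-RateLimit-Reset` carries a Unix timestamp in seconds; confirm against your actual responses before relying on it:

```javascript
// Sketch: compute how long to wait based on rate-limit headers.
// Node's http/fetch expose header names lowercased.
function msUntilReset(headers, nowMs = Date.now()) {
  const remaining = Number(headers['x-ratelimit-remaining']);
  if (remaining > 0) return 0;                  // still have request budget
  const resetSec = Number(headers['x-ratelimit-reset']);
  return Math.max(0, resetSec * 1000 - nowMs);  // wait until the window resets
}

const headers = { 'x-ratelimit-remaining': '0', 'x-ratelimit-reset': '1700000060' };
console.log(msUntilReset(headers, 1700000000 * 1000)); // 60000 — wait 60s
```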

Error Codes

  • 400: Bad Request / Validation Error
  • 401: Unauthorized / Invalid API Key
  • 403: Forbidden / Insufficient Permissions
  • 404: Resource Not Found
  • 429: Rate Limit Exceeded
  • 500: Internal Server Error

Ready to integrate?

Create your API keys and configure webhooks in the Developer dashboard.

Developer Dashboard

Dashboard

The dashboard serves as the central hub for all platform activity. It provides immediate visibility into reports, projects, team members, communications, and historical activity. Every metric updates in real-time, and users can customize which modules appear based on their workflow preferences.

Overview Cards

  • Projects with status indicators
  • Quick access to all tools (built-in + custom)
  • Report counts with date filtering
  • Shared documents overview
  • Team members with role badges
  • Chat with unread indicators

Customization

  • Customizable dashboard title
  • Background color themes
  • Hide/show individual cards
  • Tab visibility preferences
  • Company-wide settings (admins)
  • Restore hidden tabs in settings

Activity History

The history tab provides a complete audit trail of all platform activity. Each entry includes the action type, user, timestamp, and where applicable, direct links to the relevant content (documents, chat sessions, reports).

Report Generated
Document Created
AI Summary
Chat Message
Role Updated
Filter Changed
Project Status
Member Joined

Processing Tools

File Display includes specialized tools built for your specific workflows. Each tool accepts Excel or CSV files, processes them entirely client-side using deterministic algorithms, and generates formatted reports. Processing typically completes in under 20 milliseconds, regardless of file size.

Report Output
File processed successfully
Report_Output_2026.xlsx

Custom Tools Built For You

Each tool is engineered specifically for your business processes. We analyze your existing workflows, understand your data formats, and build deterministic processing logic that transforms your raw input files into actionable reports.

Your Data Format

Works with your existing files

Your Logic

Custom calculations built in

Your Output

Reports formatted your way

Common Features Across All Tools

Drag-and-drop file upload
Real-time processing logs
Formatted Excel output
Automatic file storage
AI summary generation
Email notifications
Test mode support
Activity logging

Build Your Own Tools

Any team member can create custom tools using the Build editor on the AI page. Describe what you want in plain English and let AI generate the code, or write HTML/CSS/JS directly in a browser-based editor with live preview. Toggle Data Persistence to give your tool a cloud backend. Choose to keep tools private or share them with your whole team. Every save creates a new version you can switch between. Model selectors on Build and Chat auto-load from the AI Gateway model catalog, filtered by a platform allowlist for reliability.

AI Code Generator

Describe your tool in natural language. Toggle Data Persistence on/off to tell AI whether to wire up cloud storage.

Code Editor + Live Preview

Dark-themed editor with tab support and a real-time sandboxed iframe preview

Sandboxed Execution

Tools run in a sandboxed iframe — no access to app data, auth tokens, or external APIs

Data Persistence Toggle

One-click toggle tells AI to wire up ToolAPI for cloud storage — saves/loads data via postMessage bridge to JSONB

Version History

Every save creates a new version. Switch between versions, set any as default, load old versions into the editor.

Private or Shared

Keep tools private for your own use, or share with your whole team so they appear on everyone's Tools page.

Data Viewer

Tool creators can inspect all stored database entries for each tool directly from the tool card — no SQL needed.

ToolAPI — Cloud Data Persistence

Custom tools that need to remember data across sessions (trackers, todo lists, inventories) use ToolAPI — a bridge that safely proxies data between the sandboxed iframe and a company-scoped Supabase table via postMessage.

Save Data

await ToolAPI.save({ items: [...] });
await ToolAPI.save(data, 'settings');

Load Data

const data = await ToolAPI.load();
const all = await ToolAPI.loadAll();
Company-scoped RLS
JSONB storage (1MB limit)
Multiple named keys per tool
One-click toggle in AI generator
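As an illustration of the save/load contract, here is a minimal in-memory stand-in for `ToolAPI` (inside a real custom tool the object is injected into the sandboxed iframe and persists through the postMessage bridge to Supabase, not a local `Map`):

```javascript
// In-memory stand-in for ToolAPI — illustration only.
const store = new Map();
const ToolAPI = {
  async save(data, key = '_default') { store.set(key, data); return true; },
  async load(key = '_default') { return store.get(key) ?? null; },
  async loadAll() { return Object.fromEntries(store); },
};

// Exercise the contract: default key plus a named 'settings' key
const demo = (async () => {
  await ToolAPI.save({ items: ['Widget A'], count: 1 });
  await ToolAPI.save({ theme: 'dark' }, 'settings');
  return ToolAPI.loadAll();
})();

demo.then(all => console.log(Object.keys(all).length)); // 2
```

Keep each stored value under the 1MB JSONB limit; split large datasets across named keys if needed.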

Version History

Every save automatically creates a new version. Each tool card shows a version dropdown where you can browse all versions, load any version into the editor, set any version as the default (live) version, or delete old versions. The tool runner page also has a version switcher to preview any version.

Auto-versioned · Set default · Load into editor · Delete old versions · Preview any version

Private or Shared

Tools default to private — only visible to the creator. Toggle "Shared with team" to make a tool visible to everyone in your company. Shared tools appear on all team members' Tools pages alongside built-in tools. Only the creator can edit shared tools.

Private by default · One-click share · Company-scoped · Creator retains edit access

Data Viewer for Tool Creators

Tool creators can inspect all backend data entries stored by their tools without leaving the Build page. Click the database icon on any tool card to expand an inline panel showing every key, its JSON value, and the last-updated timestamp.

Stored Data

_default (updated 2/7/2026 3:42 PM)
{ "items": ["Widget A", "Widget B"], "count": 2 }

settings (updated 2/7/2026 1:15 PM)
{ "theme": "dark", "sortBy": "date" }

2 keys stored via ToolAPI

Creator-only access · Inline on tool card · View all keys · JSON values · Timestamps · Refresh on demand

Great for building:

Calculators · Unit Converters · Text Formatters · Data Entry Forms · Quick References · Checklists · Timers · ROI Tools · Todo Lists · Inventory Trackers · Expense Logs · Habit Trackers

AI Tool Builder — Generate from Spreadsheet

Upload any .xlsx or .csv file and AI will analyze its structure — column types, statistics, sample data — then automatically generate a complete file-processing tool with custom business logic tailored to your data. The generated tool matches the dark terminal UI used by the built-in warehouse tools (Picker, Forklift, etc.).

Upload & Analyze

Drag-and-drop a spreadsheet. The profiler extracts headers, column types, numeric stats, and sample rows — all client-side.

Optional Instructions

Describe what the tool should do, or leave blank and AI decides the best processing logic based on your data.

3-Phase AI Pipeline

Analyze → Generate → Review. AI plans the tool, writes full HTML/CSS/JS with ExcelJS integration, then self-reviews for bugs.

Live Preview & Save

Preview the generated tool in a live iframe, edit the code if needed, then save as private or shared with your team.

How It Works

📂Upload .xlsx/.csv
🔍Profile columns
🧠AI analyzes data
⚡Generates tool code
✨Reviews & polishes
💾Preview & save
Available to all users · Dark terminal UI · ExcelJS integration · Console logging · Download outputs · Run history via ToolAPI · Private or shared

From Self-Service to Full-Build

Build is designed as a first step — it lets your team solve their own problems immediately by building lightweight front-end and backend tools without waiting on developers. Many day-to-day needs (calculators, trackers, checklists, data entry forms) can be solved entirely this way, with AI generating the code and ToolAPI handling data persistence.

SELF-SERVICE

Build

Create your own tools with AI + code editor. Sandboxed HTML/CSS/JS with cloud data persistence. Instant, no developers needed.

Need more?
FULL-BUILD

Developer Tab

For complex solutions — integrations, multi-page apps, advanced logic — request a project and File Display's team builds it for you.

When to use each: If you can describe your tool in a sentence or two and it works as a standalone page (calculator, tracker, form), Build handles it. If your solution needs server-side logic, third-party API integrations, multi-page workflows, complex data pipelines, or tight integration with existing platform features — submit a project request in the Developer tab and we'll build it as a fully integrated feature on the platform.

Logic Fingerprinting

Every tool's processing logic is deterministic and version-controlled. This means the exact same input will always produce the exact same output—creating a "fingerprint" that guarantees transparency and enables precise comparisons between versions.

Why It Matters

  • Auditability: Every calculation can be traced and verified
  • Reproducibility: Re-run any historical report with identical results
  • Version Control: Compare outputs before and after logic updates
  • Trust: No black-box AI randomness in core calculations

How It Works

  • All processing logic lives in versioned source code
  • Transformations are explicit functions, not learned models
  • Every change is tracked with full commit history
  • Test mode enables safe validation of logic changes

Deterministic Processing Example

// Same input → Same output, every time

Input: forklift_data_2026.xlsx

Logic: v2.4.1 (git: a3ec539)

Output: LOPSPRO_Report_Jan27.xlsx

Fingerprint: sha256:8f2a9c...

Test Mode

Every tool includes a Test Mode toggle that lets you experiment safely. Reports generated in test mode are flagged and automatically excluded from analytics, leaderboards, and business metrics — so you can validate changes without polluting real data.

Safe Experimentation

Test new logic or unfamiliar data without consequences

Clean Analytics

Test data never appears in dashboards or reports

Version Comparison

Compare outputs between logic versions safely

When to Use Test Mode

→Validating a new version of processing logic
→Training new team members on tool usage
→Testing with sample or synthetic data
→Debugging unexpected output formats
→Comparing before/after when logic changes
→Demonstrating features to stakeholders

File Display AI

New

Deltamorph Desktop Download

Download the standalone Deltamorph AI code editor.

Open Deltamorph Download Page

File Display AI provides conversational assistance and automated report analysis. The AI layer complements rather than replaces deterministic processing—data transformation remains algorithmic and verifiable, while AI provides interpretation and insights that can be automatically distributed to your team.

AI Assistant

AI Chat Assistant

  • Conversational interface for questions about operations
  • Knowledge of the complete File Display system
  • Session history with team sharing (optional)
  • Private mode for sensitive queries
  • Collaborative sessions with join capability

AI-Enhanced Reporting

Transform raw reports into actionable insights with integrated AI summaries. Configure exactly how summaries are generated and who receives them—all from the Email Settings panel on each tool.

Configurable Email List

Set up recipient lists per tool. When a report is processed, AI summaries can be automatically emailed to managers, supervisors, or anyone who needs to stay informed—without them needing to log in.

Custom AI Filters

Define exactly what the AI focuses on for each tool. Want summaries that prioritize errors? Focus on top performers? Highlight trends? Configure the AI filter to match your operational priorities.

AI Summary Features

Generate intelligent analysis of any processed report. The AI extracts key findings, identifies patterns, and ranks insights by importance.

  • Critical issues highlighted first
  • Top/bottom performers with exact numbers
  • Pattern and trend identification
  • One-click email with summary attached

Email Settings Panel

Access the Email Settings from any tool's reports tab to configure both recipient lists and AI behavior.

  • Add/remove email recipients per tool
  • Custom AI filter prompts per tool
  • Filter change history tracking
  • Editor+ permission required to modify

RAG Semantic Search

Ask questions about your historical reports using natural language. Powered by vector embeddings (pgvector), File Display understands the meaning of your queries — not just keywords.

Natural Language Queries

Ask questions like "Show me our best forklift performance days" or "When did we have the highest picker efficiency?"

  • Searches across all historical reports
  • Ranks results by relevance
  • Returns context and summaries

Historical Context in AI

AI analysis now automatically includes relevant historical data for comparison, enabling insights like "productivity is 15% above last month's average."

  • Auto-fetches similar past reports
  • Compares current vs historical metrics
  • Identifies trends over time

AI-Powered Anomaly Detection

Automatically detect unusual employee performance by comparing against rolling historical baselines. Get alerted when metrics deviate significantly from expected ranges.

Rolling Baselines

Maintains 30-day rolling averages per employee/metric with standard deviation tracking.

Smart Alerts

Email notifications for medium, high, and critical severity anomalies with severity badges.

Severity Levels

Critical (>50% / 3σ), High (>35% / 2.5σ), Medium (>20% / 2σ); smaller deviations are classified Low.
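Applying those thresholds in code might look like the sketch below. One stated assumption: a deviation is promoted if either the percent or the sigma threshold is exceeded; the platform's exact rule may differ:

```javascript
// Sketch of severity classification from the thresholds listed above.
// Assumption: either the percent OR the sigma threshold triggers the level.
function classifySeverity(deviationPct, sigmas) {
  if (deviationPct > 50 || sigmas > 3)   return 'critical';
  if (deviationPct > 35 || sigmas > 2.5) return 'high';
  if (deviationPct > 20 || sigmas > 2)   return 'medium';
  return 'low';
}

console.log(classifySeverity(55, 3.2)); // critical
console.log(classifySeverity(25, 2.1)); // medium
console.log(classifySeverity(10, 1.0)); // low
```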

How it works: When a report is processed, File Display compares each employee's metrics (moves/hour, scan %, etc.) against their personal baseline. Significant deviations are flagged, stored for review, and optionally trigger email alerts. Acknowledge or dismiss anomalies directly from the dashboard widget.

Distributed Caching (Redis)

Enterprise-grade caching layer using Upstash Redis for lightning-fast data access and reduced database load.

What's Cached

  • User profiles & company settings
  • API key validation (10x faster)
  • Rate limiting counters
  • Permission lookups

Benefits

  • Automatic cache invalidation on updates
  • Graceful degradation if Redis unavailable
  • Pattern-based cache clearing
  • Admin monitoring endpoint

Background Job Queue (QStash)

Serverless job queue system using Upstash QStash for reliable background processing with automatic retries.

Job Types

  • Email delivery with retries
  • Webhook delivery + backoff
  • AI analysis generation
  • Anomaly detection

Reliability Features

  • Exponential backoff retries
  • Dead letter queue
  • Job status tracking
  • Deduplication support

Monitoring

  • Job logs per execution
  • Stats by job type
  • Progress tracking
  • Cancel/retry from API

How it works: Long-running operations are queued instead of blocking API requests. Workers process jobs in the background with automatic retries on failure. Jobs are tracked in the database with full logging, allowing you to monitor progress and diagnose issues. If QStash is unavailable, the system gracefully falls back to synchronous processing.

API Tiers & Usage Control

Tiered API access with feature gating, rate limiting, and usage quotas. Manage per-company access levels from GodView.

Free: $0, 100 req/day
Starter: $29/mo, 1,000 req/day
Pro ⭐: $99/mo, 10,000 req/day
Enterprise: Custom, Unlimited

Feature Gating

  • Webhooks (Starter+)
  • AI Analysis (Starter+)
  • Batch Uploads (Pro+)
  • Semantic Search (Pro+)

Rate Limiting

  • Per-minute limits
  • Per-day limits
  • Monthly quotas
  • Custom overrides

Admin Controls

  • GodView tier panel
  • Per-company settings
  • Usage monitoring
  • Quota progress bars

How it works: Each company is assigned a tier that determines their API access. Features are automatically gated based on tier; requests to disabled features return clear upgrade prompts. Usage is tracked monthly with quota enforcement. Admins can override limits per company from GodView.

Error Tracking & Monitoring (Sentry)

Centralized error tracking with Sentry integration, structured logging with correlation IDs, and comprehensive health checks.

Error Tracking

  • Sentry integration
  • Stack trace capture
  • Session replay
  • Error grouping

Structured Logging

  • JSON log format
  • Correlation IDs
  • Log levels
  • Request tracing

Health Checks

  • /api/health endpoint
  • Component status
  • Liveness/readiness
  • Alert aggregation

How it works: Errors are automatically captured and sent to Sentry with full context including user info, correlation IDs, and stack traces. The structured logging service outputs JSON in production for easy aggregation by log management tools. Health check endpoints provide status for load balancers, while the monitoring API aggregates alerts from multiple sources including failed jobs, anomalies, and API errors.

AI Agent System

A full agentic AI system that serves as your company's intelligent data hub. The agent can query your entire database (reports, designs, documents), synthesize cross-report analyses, send email notifications, and maintain a persistent company knowledge base ("snowball" memory). All access is governed by role-based permissions configured per company.

Agent Chat Interface

The /chat page is a full agentic hub with streaming NDJSON responses, inline tool call visualization, and generative UI components (tables, charts, summaries).

  • Multi-tab interface: Chat, Build, Log, Memory, Settings
  • Streaming responses with tool call display
  • Generative UI: dynamic tables, charts, summaries
  • Chat history persistence via Supabase

Agent Tools

The agent uses Gemini function calling to invoke a suite of specialized tools. It can chain multiple tools for complex multi-step analyses.

  • queryReports — query any report type by date/filters
  • synthesizeReports — cross-report analysis
  • sendEmail — send notifications via Resend
  • searchKnowledge, addToMemory, getMemoryContext
  • queryDocuments, getCompanyStats

Company Knowledge Base ("Snowball")

The agent maintains a persistent, growing knowledge base per company. Insights, summaries, facts, decisions, and preferences are stored with importance ratings and auto-expire after configurable periods. This "snowball" gives the AI cumulative context about your operations over time.

Memory Categories

Insight, Summary, Fact, Decision, Preference — each with 1-5 importance rating and optional expiry.

Full-Text Search

Search the knowledge base using natural language. The agent automatically includes relevant memory in its system prompt.

Auto-Pruning

Expired entries are automatically cleaned. Founders can prune manually or configure retention policies.

Role-Based Agent Access

The founder configures which roles can access the agent and which data sources are available. Permissions are enforced server-side via RLS and the agent_settings table.

Founder: Full access + Settings
Admin: Chat + Memory R/W
Editor: Chat + Memory R/W
Viewer: Chat + Memory Read

Founder Settings Panel

Company founders have a dedicated settings tab in the agent interface to configure every aspect of the AI's behavior.

  • Toggle data sources (reports, designs, docs)
  • Configure per-role permissions
  • Set memory retention period
  • Custom AI instructions
  • Enable/disable auto-synthesize and digest emails

Agent Architecture

Database Tables: 3 (agent_memory, agent_settings, agent_tasks)

Core Modules: 5 (permissions, memory, tools, orchestrator, renderer)

API Routes: 7 (chat, settings, memory, stream, v1 endpoints)

Agent Tools: 12 (Gemini function calling via structured schemas)

Team Collaboration

Real-time collaboration is built into every aspect of the platform. Team members can communicate through chat, see who's online across every page, and work together on documents simultaneously — all powered by Supabase Realtime. Everything is company-scoped with full data isolation between organizations.

Team Chat

Full groupchat auto-created per company. Create direct messages or custom groups. Real-time delivery with per-room notifications.

Calendar

Full scheduling system with personal & company calendars. Grid and list views.

@Mentions

Tag team members in any chat. Mentions appear in dedicated inbox with unread counts. Mark as read individually.

Member Profiles

Click any team member to view their profile with quick actions: start a DM, begin AI chat, create shared doc, or view activity.

Support

Built-in support channel for admin communication. Threaded conversations with read receipts.

Member Quick Actions

From any team member's profile modal, instantly launch collaboration workflows:

Start Chat

Opens private DM instantly

AI Chat

Ask AI about member activity

Start Doc

Private doc just for you two

View Activity

Filter history by member

Notification System

Granular control over notifications with 10 configurable categories. Each user can customize their preferences, and security alerts remain always-on for account protection.

Reports
Chat
Mentions
Projects
Support
AI Summaries
Documents
Weekly Digest
Security
Marketing

Calendar — Scheduling System

Personal & company calendars for team scheduling

A full-featured calendar and scheduling system at /calendar. Every user has a personal calendar and a shared company calendar. Events can be published to one or both and appear on the calendars of all invited participants.

Grid & List Views

Switch between a month grid view with color-coded event chips and a chronological list view with expandable event details. Both views update in real-time via Supabase Realtime.

Personal & Company Calendars

Toggle between “My Calendar” (events you created or are invited to) and “Company” (shared events visible to the whole team). Events can target personal, company, or both calendars.

Event Types

General, Meeting, Deadline, Reminder, Task

Activity Logging

All events logged to history in real time

Event Details

Each event includes: title, description, event type with color coding, date/time range (or all-day), location, external links, participant selection from the team roster, and visibility controls (personal / company / both).

Multiplayer — Real-Time Presence

See who's online, where they are, and what they're doing

Every page in the platform is multiplayer-aware. Supabase Realtime powers live presence tracking, typing indicators, and collaborative editing across the entire app. Users see real-time avatar stacks showing who else is on the same page, and typing indicators appear in chat and document editors as colleagues type.

Company-Wide Presence

A global presence channel tracks every online user across the platform. The dashboard shows who's online right now, which page they're on, and what resource they're viewing. Presence heartbeats auto-expire stale sessions.

Collaborator Avatars

Every page that supports collaboration shows a live avatar stack of current viewers. Avatars appear in documents, design studio, chat rooms, reports, and tool pages with colored rings and tooltips.

Typing Indicators

Broadcast-powered typing indicators show who is currently composing a message or editing a document. Debounced to avoid flickering, with automatic timeout if the user stops typing.

Real-Time Sync

Postgres Changes subscriptions push database updates to all connected clients instantly. New chat messages, document edits, tool creations, and call events appear live without polling or page refresh.

Multiplayer-Enabled Pages

Dashboard · Team Chat · Documents · AI Chat · Calendar · Picker · Forklift · Returns · Shorts · Replen · Loader · Truck Errors · Returns Intel · Skips

Shared Documents

The documents module provides a collaborative workspace for team documentation. Documents update in real-time across all connected users, with automatic saving and version tracking. Sharing controls allow documents to be private, company-wide, or shared with specific team members.

Document Features

  • Real-time collaborative editing
  • Auto-save with timestamp display
  • Mobile-optimized editor
  • Deep linking from activity log

Sharing Modes

  • Private: Only you can access
  • DM Style: Shared with just one other person
  • Company: All team members can view/edit
  • View Only: Company can view but not edit

DM-Style Documents

Create private documents shared exclusively between you and one other team member. Perfect for 1:1 notes, feedback, or private collaboration. When you click "Start Doc" from a member's profile, a DM document is automatically created with RLS-enforced privacy—only you two can see it.

Database-Level Privacy

RLS policies enforce access for exactly two users

No Team Activity Log

DM docs don't appear in company-wide activity

Real-Time Sync

Both users see edits instantly like any doc

Account & Settings

Personalize your experience with comprehensive account settings. Manage your profile, customize which dashboard tabs are visible, configure notification preferences, and access your activity history including mentions and support conversations.

Profile Settings

  • Display name customization
  • Password management
  • Tab visibility controls (show/hide)
  • Email notification preferences

Activity & Mentions

  • View mentions from team members
  • Support conversation history
  • Direct message support team
  • Unread notification badges

Granular Member Permissions

Founder-Only Feature

Client ID founders have complete control over every permission for every member in their organization. Configure exactly what each team member can access, see, and modify with unprecedented granularity.

Tool Access Control

  • Show/hide any of the 11 tools per user
  • Control Analytics tab visibility per tool
  • Control AI tab visibility per tool
  • Independent settings per team member

Settings Access Control

  • Email settings edit permissions
  • AI filter configuration access
  • Report settings management
  • Dashboard tab visibility overrides

Permission Presets

Create and save reusable permission templates that can be quickly applied to multiple team members. Perfect for onboarding new users or standardizing access levels across departments.

Save custom presets · One-click apply · Bulk updates · Reset to defaults
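
A preset can be modeled as a partial override applied on top of each member's defaults, which is what makes one-click and bulk application cheap. The permission keys and default values below are illustrative, not the platform's real schema.

```typescript
// Sketch: a permission preset is a partial overlay on top of defaults.
type Permissions = Record<string, boolean>;

// Hypothetical default permission set for a new member.
const DEFAULTS: Permissions = {
  analyticsTab: true,
  aiTab: true,
  emailSettings: false,
  reportSettings: false,
};

// Applying a preset = defaults overlaid with the preset's explicit values.
function applyPreset(preset: Partial<Permissions>): Permissions {
  return { ...DEFAULTS, ...preset };
}

// Bulk apply: one preset, many members — mirrors the one-click apply flow.
function bulkApply(
  memberIds: string[],
  preset: Partial<Permissions>
): Map<string, Permissions> {
  return new Map(memberIds.map((id) => [id, applyPreset(preset)]));
}

// Usage: lock down AI access for two members in one call.
const result = bulkApply(["u1", "u2"], { aiTab: false });
```

Resetting to defaults is then just `applyPreset({})`, which is why the overlay design keeps the preset system simple.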

Analytics & Tracking

Every action in File Display is logged and trackable. The activity log provides a complete audit trail, while the leaderboard gamifies team engagement with weighted activity scoring. Admins have access to platform-wide analytics for comprehensive oversight.

Activity Tracking

  • Filter by user, date range, action type
  • Direct links to related content
  • Detailed action metadata
  • Export capabilities
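
The filter combination above (user, date range, action type) composes naturally as optional predicates. The entry shape and field names are assumptions for illustration.

```typescript
// Sketch: filtering an activity log by user, action type, and date range.
interface ActivityEntry {
  userId: string;
  action: string; // e.g. "report_generated"
  at: number;     // epoch ms
}

interface Filter {
  userId?: string;
  action?: string;
  from?: number;
  to?: number;
}

// Each undefined filter field means "match everything" for that dimension.
function filterLog(log: ActivityEntry[], f: Filter): ActivityEntry[] {
  return log.filter((e) =>
    (f.userId === undefined || e.userId === f.userId) &&
    (f.action === undefined || e.action === f.action) &&
    (f.from === undefined || e.at >= f.from) &&
    (f.to === undefined || e.at <= f.to)
  );
}

// Usage: all documents created by user "a".
const log: ActivityEntry[] = [
  { userId: "a", action: "report_generated", at: 100 },
  { userId: "b", action: "doc_created", at: 200 },
  { userId: "a", action: "doc_created", at: 300 },
];
const aDocs = filterLog(log, { userId: "a", action: "doc_created" });
```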

Team Leaderboard

Activity scoring with weighted points:

  • Reports generated: +10
  • AI summaries: +5
  • Documents created: +3
  • Chat messages: +1
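
The weighted score is a straightforward dot product of per-action counts and the point values listed above. The action key names are illustrative; the weights come from the table.

```typescript
// Sketch: weighted leaderboard scoring using the published point values.
const WEIGHTS: Record<string, number> = {
  report: 10,      // reports generated
  ai_summary: 5,   // AI summaries
  document: 3,     // documents created
  chat_message: 1, // chat messages
};

// counts: how many of each action a member performed this period.
function score(counts: Record<string, number>): number {
  return Object.entries(counts).reduce(
    (sum, [action, n]) => sum + (WEIGHTS[action] ?? 0) * n,
    0
  );
}

// 2 reports + 1 AI summary + 4 chat messages = 20 + 5 + 4 = 29 points.
const weekly = score({ report: 2, ai_summary: 1, chat_message: 4 });
```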

Security & Permissions

Security is foundational to File Display. Every database table enforces Row Level Security policies that automatically filter data by company. Users never see data from other organizations, and this isolation is enforced at the database level, not the application level.

  1. Authentication: JWT-based secure login
  2. Row Level Security: database-enforced isolation
  3. Company Boundaries: automatic data partitioning
  4. Role Permissions: granular access control

Role Permissions

  • Founder: Full access, assign admin role, company settings
  • Admin: Full access except assigning admin, manage members
  • Editor: Generate reports, create documents, use chat
  • Viewer: Read-only access to all content
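
The role table can be expressed as a capability lookup. The capability labels are illustrative, and the exact split of settings access between Founder and Admin is an assumption based on the descriptions above ("full access except assigning admin").

```typescript
// Sketch: role capability checks matching the role table above.
type Role = "founder" | "admin" | "editor" | "viewer";
type Capability =
  | "assign_admin"
  | "company_settings"
  | "manage_members"
  | "create_content"
  | "read";

// Assumed grants: Admin has everything except assigning the admin role.
const GRANTS: Record<Role, Capability[]> = {
  founder: ["assign_admin", "company_settings", "manage_members", "create_content", "read"],
  admin:   ["company_settings", "manage_members", "create_content", "read"],
  editor:  ["create_content", "read"],
  viewer:  ["read"],
};

function can(role: Role, cap: Capability): boolean {
  return GRANTS[role].includes(cap);
}
```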

Multi-Tenancy Architecture

File Display uses a shared-database, company-isolated architecture. Every table includes a company_id foreign key, and Row Level Security policies ensure users can only access rows belonging to their company. This provides the isolation benefits of separate databases with the operational simplicity of a single schema.

  • Company assignment: by email domain
  • Data filtering: database-enforced
  • Zero trust: RLS on every query

Technical Architecture

File Display is a full-stack multi-tenant platform built on Next.js 16 with React 19, backed by Supabase (PostgreSQL + pgvector + RLS), dual AI providers, distributed caching, background job queues, and a Deltamorph desktop AI code editor. All file processing is client-side for maximum speed; the server handles auth, AI orchestration, email, and data persistence.

Upload → Process → Store → Analyze → Report

Frontend

  • Next.js 16: App Router + Turbopack
  • React 19: Server components, hooks, Suspense
  • TypeScript: Strict type-safe codebase
  • Tailwind CSS 4: Utility-first styling
  • ExcelJS: Client-side spreadsheet processing
  • Recharts: Data visualization & charts

Backend & Data

  • Supabase: Auth, DB, realtime, storage
  • PostgreSQL + pgvector: RLS multi-tenancy & vector search
  • Upstash Redis: Distributed caching & rate limiting
  • Upstash QStash: Background job queues & cron
  • Resend: Transactional & notification email
  • Sentry: Error tracking & performance monitoring

AI & Infrastructure

  • Vercel AI Gateway: Unified multi-provider inference layer for chat, build, and API endpoints
  • Google/OpenAI/Anthropic models: Selected via Gateway allowlist and dynamic model discovery
  • Vercel: Hosting, serverless, edge CDN
  • Deltamorph Desktop: AI code editor for macOS & Windows (VS Code fork)
  • GitHub Actions: CI/CD for web & desktop builds
  • MCP Server: AI agent integration protocol

Multi-Tenancy Model

Every table includes a company_id column with Row Level Security (RLS) policies that automatically scope all queries to the authenticated user's company. Users are assigned to companies by email domain. Application code never manually filters by company — RLS handles it at the database layer.

RLS on every table · Email-domain routing · Company-scoped storage · Zero-trust data isolation · Automatic profile linking
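
The email-domain routing rule is simple enough to sketch directly. This is an illustrative helper, not the platform's actual signup code; whether subdomains or plus-addressing get special handling is not documented here, so the normalization below is an assumption.

```typescript
// Sketch: derive the Client ID from an email address, per the
// "email domain = Client ID" rule. Lowercasing is an assumed normalization.
function clientIdFromEmail(email: string): string | null {
  const at = email.lastIndexOf("@");
  if (at < 0 || at === email.length - 1) return null; // not a valid address
  return email.slice(at + 1).toLowerCase();
}

// john@acme.com and sarah@ACME.com land in the same workspace.
const same =
  clientIdFromEmail("john@acme.com") === clientIdFromEmail("sarah@ACME.com");
```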

Model Context Protocol (MCP)

File Display integrates with the Supabase MCP server, connecting AI-powered development tools directly to the platform's database and services. MCP (Model Context Protocol) is an open standard that lets AI assistants interact with live data sources securely.

What MCP Enables

  • Live Database Queries: AI assistants can query tables, inspect schemas, and run SQL directly against the project database
  • Migration Management: Apply and list database migrations without leaving the editor
  • Edge Function Deployment: List, inspect, and deploy Supabase Edge Functions
  • Type Generation: Auto-generate TypeScript types from the live database schema
  • Logs & Debugging: Retrieve service logs for API, Postgres, Auth, Storage, and Realtime

How It Works

  1. A .cursor/mcp.json config file connects the IDE to the Supabase hosted MCP server
  2. Authentication happens via browser-based OAuth — no access tokens or secrets needed
  3. The AI assistant gains access to database tools, migrations, edge functions, logs, and docs
  4. All operations respect your Supabase account permissions and organization access
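
For reference, a minimal `.cursor/mcp.json` for the hosted Supabase MCP server looks roughly like the fragment below. This shape is taken from Supabase's published MCP setup; verify the endpoint and key names against the current Supabase MCP documentation before using it.

```json
{
  "mcpServers": {
    "supabase": {
      "url": "https://mcp.supabase.com/mcp"
    }
  }
}
```

Because authentication is browser-based OAuth, no access token belongs in this file.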

Available MCP Tools

  • list_tables: List database tables
  • execute_sql: Run SQL queries
  • apply_migration: Apply migrations
  • list_migrations: View migrations
  • get_logs: Retrieve service logs
  • get_advisors: Security & perf tips
  • generate_types: TypeScript types
  • deploy_function: Deploy edge functions
  • list_extensions: Postgres extensions
  • get_project_url: Project API URL
  • get_anon_key: Publishable keys
  • search_docs: Search Supabase docs

Permawrite License

File Display is licensed under the Permawrite License, a proprietary license framework issued by The Parent Holding Company. The Permawrite License anchors permanent, verifiable ownership to the Arweave blockchain — a decentralized, immutable storage network where data persists indefinitely and cannot be altered, overwritten, or removed by any party.

Permanence

Ownership records are immutable — they cannot be erased, overwritten, or lost due to platform changes or domain expiration.

Verifiability

Every license is recorded on the Arweave blockchain and independently verifiable by any party at any time.

Attribution

Attribution is a structural requirement enforced by the license and backed by an immutable on-chain ledger.

Key Provisions

  • All rights reserved by the original author with permanent, on-chain proof of ownership.
  • AI-assisted work is owned by the human author who directed the output — AI is a tool, not a co-author.
  • Any use must prominently credit the original author and reference the Arweave licensing transaction.
  • No forking, redistribution, or derivative works without explicit written permission from the author.

View Full License

Platform Statistics

Current repo snapshot as of March 2026. Core app logic is TypeScript-first with 99 Supabase tables, 14 Greco tools, DockVoo, and a centralized daily stats aggregator.

  • Lines of Code: 87.5k
  • Source Files: 298
  • API v1 Routes: 35+
  • Supabase Tables: 99
  • Pages: 43
  • Components: 22
  • Dependencies: 25

Codebase Breakdown

  • Frontend Pages: 43 React pages with routing
  • API Routes: 123 server-side endpoints (35+ external v1)
  • Library Modules: 74 shared utilities & hooks
  • TypeScript / TSX: 87.5k lines across 298 files