MARS Help Guide
macOS Artifact Recovery Suite
MARS is a data extraction and recovery toolkit for macOS that salvages SQLite, plist, log, and cache data from a set of raw, carved files and matches it with artifacts of forensic interest from a reference system.
In some cases, MARS can recover thousands more database rows and hundreds of extra days of data beyond what exists in the original reference files alone.
- Exemplar – The set of baseline target artifacts (databases, logs, etc.) from a reference system that forms the "ground truth" for recovery.
- Candidate – An unclassified file that MARS will recover, then attempt to match to exemplar artifacts.
- Rubric – A JSON file that contains a per-column matching guide for a given database.
How It Works
Exemplar Scan
MARS uses a catalog of known artifacts to collect target files from an exemplar system. It can scan most disk image formats (EWF, etc.), folders, archives, and live macOS systems.
Artifacts with associated archives - like Powerlog's .gz backups - are automatically decompressed, deduplicated, and combined.
Databases are then "fingerprinted" column-by-column to create rubrics for matching candidates against.
Candidates Scan
A four-stage recovery and vetting process ensures that all recoverable candidate data is recovered.
MARS assesses and classifies the recovered data - including from within corrupt SQLite databases - then matches it against exemplar rubrics.
Truly unrepairable databases are byte-carved with protobuf extraction and timestamp detection for manual analysis.
Reports
Both Exemplar and Candidate reports provide quick links to artifact folders and module reports, such as WiFi history and Biome parsing. Data Comparison Reports show exactly how much data you've gained beyond baseline, measured in rows and days, and include a comprehensive zoomable timeline.
Export Options
Export original Exemplar files, matched Candidates, or Both. The full-path option recreates the original file and folder structure, making the data easily parsable by external tools such as mac_apt, APOLLO, plaso, and others.
The combined export deduplicates and merges data while maintaining its integrity. Discrete user and profile account data is never mixed. An optional database source column marks each row's origin - so you can always trace the information back to its source.
Free Scan
If you just want to salvage corrupt SQLite databases, MARS can do that, too. Run the recovery pipeline on any set of files to automatically recover as much data as possible. Try it on the SQLite Forensic Corpus to see how it works.
Additional Features
- Mount EWF images directly in macOS via FUSE-T
- Automatic pseudo-logarchive creation, ready for Unified Logs parsing
- Plotly data visualization for SQLite
- Add and edit targets using the Artifact Recovery Catalog (ARC) Manager
- Export and import anonymized exemplar catalog packages to share with other MARS users
v1.0 Report Modules
- WiFi activity and location mapping
- Biome parsing
- Firefox JSONLZ4 parsing
- Firefox cache parsing (extract images, HTML, etc.)
Enter h in the MARS menu to open this help file.
Quick Start
The typical MARS workflow follows these steps:
Create a Project
Set up a new case with project name, examiner info, and case number.
Run Exemplar Scan
Scan a reference macOS system to extract target artifacts, create matching rubrics, and run exemplar report modules.
Run Candidates Scan
Process carved/recovered files, classify them against your exemplar schemas, and run candidate report modules.
Generate Reports and Review Data
Investigate recovered data and create exemplar vs recovered comparison reports.
Export Data
Package results for external forensic tools.
Project Management
MARS organizes all analysis within projects. Each project contains exemplar scans, candidate scans, reports, and exported data.
Creating a New Project
- Select New Project from the main menu
- Choose a directory for the project folder
- Enter project details:
- Project Name – Descriptive name for the case
- Examiner Name – Your name (optional)
- Case Number – Evidence or case identifier (optional)
- Description – Additional notes (optional)
Opening an Existing Project
Select Open Project and browse to a .marsproj file, or use
Last Project to quickly reopen your most recent project.
Project Structure
Each project creates the following structure:
ProjectName/
│
├── ProjectName.marsproj # Project metadata
├── project.db # Scan tracking database
├── .marsproj # Project configuration file
│
├── output/
│ ├── index.html # Index of all scans
│ │
│ └── MARS_ProjectName_YYYYMMDD_HHMMSS/ # Exemplar folder
│ │
│ ├── exemplar/ # Exemplar scan results
│ └── candidates/ # Candidates scan folders
│
├── exports/ # Exported project data
└── reports/ # Generated HTML reports
A project may contain multiple exemplar scans, and each one has its own folder within
the output folder.
Likewise, each exemplar scan may contain multiple candidates scans. Each one has its own
timestamped folder within the exemplar's candidates folder.
The exports folder contains all exported data, including original exemplar
files, recovered candidate files, and combined datasets.
Exemplar Scan
After creating a project, you'll typically start by running an exemplar scan, the first option on the main menu. Start by selecting the reference system source. Then, unless you're scanning a live system, use the file explorer to select your image, archive, or folder.
Source Types
| Source | Description |
|---|---|
| Disk Image | E01, Ex01, DD, DMG, or other forensic image formats |
| Directory | Mounted macOS volume or exported file system folder |
| Live System | Scan the running macOS machine (requires Full Disk Access) |
| Archive | TAR, ZIP, GZIP containing macOS file structure |
Image Scan
MARS can scan forensic disk images directly using dfVFS. After choosing the image, MARS will prompt you to select the target partition. For APFS drives, MARS will highlight the suggested Data partition with a green star (★) symbol.
Live System - Full Disk Access (FDA)
When scanning a live system, MARS requires Full Disk Access permission to read protected files.
Additionally, we recommend you run a live system scan with elevated privileges (sudo) to ensure access to all protected files. MARS will prompt you to enter your password before scanning.
Directory/Archive
MARS can scan regular directories and archives (TAR, ZIP, GZIP) containing macOS file structures. Be aware that large archives may take a while to process. If possible, consider extracting the archive to a directory first.
Post-Exemplar Scan
Once the scan finishes, MARS will prompt you to open the scan report document, which provides quick access to key folders and reports.
If you decline, you can still access the scan results later by navigating to the
exemplar scan's reports folder. In the same folder you'll find the
exemplar_scan_summary.txt file, which contains detailed, per-file information about
the scan results.
Exemplar Structure
Each exemplar scan creates the following folder structure:
MARS_ProjectName_YYYYMMDD_HHMMSS/
│
├── case_metadata.json # Scan metadata
├── candidates/
│
├── exemplar/
│ ├── caches/
│ ├── keychains/
│ ├── logs/
│ │
│ └── databases/
│ ├── catalog/ # Combined/deduped databases for matching
│ ├── encrypted/ # Encrypted databases
│ ├── originals/ # Individual components of combined databases
│ └── schemas/ # Catalog rubrics/schemas and hash map
│
└── reports/
├── exemplar_scan.html
├── exemplar_scan_summary.txt
└── [individual module reports folders]
Each subfolder within the exemplar folder contains the output of a specific
file type.
- caches – Cache files, e.g., Biome streams, Spotlight indexes, and browser data
- keychains – Encrypted keychains, both system and user
- logs – Log files, e.g., Apple System Logs, Bluetooth devices, and WiFi analytics
- databases – SQLite databases only
Exemplar Database Folders
The databases folder is different. It contains several subfolders.
| Directory | Description |
|---|---|
| catalog/ | Target SQLite files from the artifact catalog. Each subfolder contains the named database and a (unknown)_provenance.json file, which tracks file data and origin information. (See below for details.) |
| encrypted/ | Databases that appear to be encrypted are routed here. Since no rubric can be created, these databases are not used for matching. |
| originals/ | Contains the original, uncombined SQLite databases from the artifact catalog, including archives. You might also find "dummy" databases here that macOS creates, which have no usable schema (only a single 'integrityCheck' table). |
| schemas/ | Contains the schemas and rubrics for each catalog database, as well as an exemplar_hash_lookup.json file, which contains the hash lookup table for O(1) schema matching. |
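The O(1) lookup works like a dictionary index: each catalog schema is reduced to a stable hash once, so matching a candidate later is a single key lookup. A minimal sketch of the idea in Python (function and variable names here are illustrative, not MARS internals):

```python
import hashlib
import json
import sqlite3

def schema_fingerprint(db_path):
    """Reduce a database's schema to a stable hash.

    Table definitions are normalized (whitespace collapsed, lowercased)
    and sorted so logically identical schemas always hash the same way.
    """
    con = sqlite3.connect(db_path)
    rows = con.execute(
        "SELECT name, sql FROM sqlite_master WHERE type='table' ORDER BY name"
    ).fetchall()
    con.close()
    normalized = json.dumps(
        [(name, " ".join(sql.split()).lower()) for name, sql in rows if sql]
    )
    return hashlib.sha256(normalized.encode()).hexdigest()

# Build the lookup once from exemplar databases...
#   exemplar_hash_lookup = {schema_fingerprint(p): p for p in exemplar_paths}
# ...then matching a candidate is a single dict access, i.e. O(1):
#   match = exemplar_hash_lookup.get(schema_fingerprint(candidate_path))
```

The hash is insensitive to table order and formatting, which is what makes exact-schema matching reliable across separately carved copies of the same database type.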
Exemplar Catalog Folder
Each database folder and filename tells a story.
If a database type has multiple files with the same schema, they are combined into a
single file. Likewise, if the database has archived copies, they're decompressed,
combined, and deduplicated. Combined files are indicated with a
.combined filename extension, and each source database is listed in the
provenance.json file.
If a database type contains multiple files with different schemas, they are marked with
version numbers: _v1, _v2, etc.
Some databases are marked with an _empty suffix. This indicates that the
file contains no useful data, but since the schema is intact, candidate data can
still be matched to it. For that reason, these database "shells" are kept
in catalog/.
Databases associated with a macOS user account are named with the user's UID.
So, Biome Databases 1_jdoe_v2_empty contains the second schema version of
user jdoe's Biome 1 database, which is also empty.
Database folders from applications that use profiles (e.g., web browsers) include
subfolders for each profile. So, Chrome Cookies_jdoe might contain several
subfolders, like Default, Profile 1, etc., each with its own
database.
Next Steps
If you're using MARS only to gather files and use the reports, you can skip ahead to Reports.
If you're looking to salvage and match carved files, continue to the next section.
Candidates Scan
The candidates scan processes carved or recovered files (typically from tools like PhotoRec, Scalpel, X-Ways, Encase) and classifies them against your exemplar schemas.
Selecting an Exemplar
Before running a candidates scan, you must have an exemplar available:
- Project Exemplar – Use a scan from the current project
- Imported Package – Use an exemplar package exported from another project
Selecting a Candidates Folder
Depending on your goals, you can select the entire carved files output folder, or a subset. Whichever folder you choose, MARS will scan it recursively. Files that are clearly not target system files (image files, etc.) will be skipped.
The remainder will be introspected, scanned for magic bytes and other key signatures,
then classified. Archives will be automatically decompressed and processed, including
using gzrecover to crack corrupt .gz files.
One of the most important aspects of SQLite data recovery is the
lost_and_found table that the sqlite3 .recover command
creates.
When a database becomes corrupt, the sqlite3 .recover command can be
used to attempt recovery. However, even if the general schema can be made whole again,
data is often "lost" within the pages of the database. In an attempt to preserve all
data, sqlite3 creates a lost_and_found table, a sort of
"catch-all" of leftover "lost" rows.
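The recovery step described above can be reproduced with the stock sqlite3 command-line shell (version 3.29 or later). A hedged sketch, assuming sqlite3 is on the PATH; MARS wraps this step with additional vetting:

```python
import subprocess

def recover_sqlite(corrupt_path, output_path):
    """Pipe sqlite3's .recover output into a fresh database.

    Equivalent to the shell pipeline:
        sqlite3 corrupt.db ".recover" | sqlite3 recovered.db
    Rows that sqlite3 cannot place back into the original schema end up
    in a lost_and_found table in the recovered database.
    """
    dump = subprocess.run(
        ["sqlite3", corrupt_path, ".recover"],
        capture_output=True, text=True, check=True,
    )
    subprocess.run(
        ["sqlite3", output_path],
        input=dump.stdout, text=True, check=True,
    )
```

On an intact database this simply produces a clean copy; on a damaged one, check the resulting file for a lost_and_found table.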
A database's L&F table could contain data from any or all of its tables, and some of it may be corrupt or incomplete. In our test cases, some L&F tables were hundreds of thousands of rows long and hundreds of columns wide. After all, the table must contain as many columns as the longest row of lost data.
One of the challenges of MARS was to parse these lost_and_found tables
and match the data to exemplar schemas. MARS does this through the use of rubrics. For
example, each column of each table is analyzed for semantic roles, like
timestamp, uuid, or email. Along with other
datapoints, these create a "fingerprint" for the table.
Likewise, contiguous blocks of L&F data are "fingerprinted", and where the rubric and L&F "fingerprints" coincide, we have a probable data match.
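As a rough illustration of how semantic roles might be derived and compared, here is a simplified sketch; the real rubrics carry many more datapoints than this:

```python
import re

UUID_RE = re.compile(
    r"^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$", re.I
)
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def column_role(values):
    """Guess a semantic role for a column from its non-null values."""
    vals = [v for v in values if v is not None]
    if not vals:
        return "empty"
    # Plausible Unix seconds between 2000-01-01 and 2038-01-19
    if all(isinstance(v, (int, float)) and 946684800 <= v <= 2147483647
           for v in vals):
        return "timestamp"
    if all(isinstance(v, str) and UUID_RE.match(v) for v in vals):
        return "uuid"
    if all(isinstance(v, str) and EMAIL_RE.match(v) for v in vals):
        return "email"
    return "text" if all(isinstance(v, str) for v in vals) else "numeric"

def fingerprints_match(rubric_roles, block_roles):
    """A block of L&F rows is a probable match when its column roles
    coincide with a rubric's column roles."""
    return rubric_roles == block_roles
```

In practice the comparison is fuzzier than strict equality (L&F blocks may be ragged or partially corrupt), but the role-sequence idea is the core of the match.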
Post-Candidates Scan
Once the scan finishes, MARS will prompt you to open the scan report document, which
provides quick access to key folders and reports. If you decline, you can still access
the scan results later by navigating to the candidates scan's
reports folder.
Candidates Structure
Each candidates scan creates the following structure within an exemplar's
candidates folder:
YYYYMMDD_HHMMSS/
│
├── metadata.json # Scan metadata
├── caches/
├── logs/
├── databases/
│ ├── carved/ # Byte-carved corrupt databases
│ ├── catalog/ # Exact schema matches to exemplar catalog
│ ├── empty/ # Catalog matches without usable data
│ ├── found_data/ # Unmatched or near-matched salvaged data
│ ├── metamatches/ # Intact non-catalog databases
│ ├── schemas/ # Metamatch schemas
│ └── selected_variants/ # Individual recovered carved databases
└── reports/ # Candidates HTML reports
Candidates Database Folders
| Directory | Description |
|---|---|
| catalog/ | Exact schema matches to known exemplar databases. These can consist of carved (intact or mostly intact) data, L&F data, or a mix of both. Some databases have a rejected subfolder, which contains an SQLite database of data that didn't meet rubric or datatype standards during the matching process. These files may still contain useful data, so make sure to review them. |
| empty/ | Databases may be exact catalog matches but contain no usable data. They are moved into the empty folder for reference. Don't ignore this folder, though, as empty databases may also contain rejected data. |
| found_data/ | Orphaned data from L&F tables that couldn't be matched. If the data was determined to be a near match to an exemplar, the name will contain a match hint. |
| metamatches/ | Not all carved databases will be matched to exemplars, but they may still contain important information. So, MARS matches them among themselves, or metamatches, to create a strong rubric for matching L&F data. Databases with identical schemas are combined and deduplicated for ease of browsing. The file/folder name is based on the first table name. |
| schemas/ | Similar to the schemas directory in the exemplar scan, this contains the schema and rubric for each metamatch. |
| selected_variants/ | Contains the "best" variant of each carved database. (See below for details.) |
Candidates Catalog Folder
The Candidates catalog folder is similar to the exemplar catalog folder, but there are some key differences.
With exemplar data, there's no doubt as to which user or profile the database belongs to. But with L&F data, it's not always so clear-cut.
For example, imagine the following exemplar system:
- Two discrete users
- Each user has three Chrome profiles, each with its own separate set of Chrome databases
- The schemas are identical between users/profiles for a given Chrome database
MARS will attempt to match L&F data to the exemplar schemas, but since the schemas are
identical, it can't confidently distinguish between the different user profiles. To
avoid attributing data to the wrong user or profile, MARS creates a
_multi database to indicate that the data therein could be from any of the
Chrome profiles.
In cases where some but not all tables match between database types, only
those tables they have in common will be added to the _multi database.
Variant Selection
The first step in the recovery process is to select the "best" variant of each database. What makes a variant "best"? MARS uses an internal scoring system to decide.
First, it has to open. If a database opens, is valid, and has data, MARS selects this
original variant and moves on to the next database. If it doesn't open, MARS will try
to clone the database, which can sometimes fix schema issues, then
attempt to recover it using the .recover command. If a
lost_and_found table is created, MARS stores that data for matching.
If the database is determined to be an exemplar catalog match, MARS will go a step
further and use DoD Cyber Crime Center's
sqlite_dissect
tool. It can potentially extract even more data from severely corrupted databases than
.recover.
Each variant is scored based on the number of rows and tables recovered, a Jaccard comparison with an excerpt of exemplar data, and the overall integrity of the database. In a tie, the variant closest to the original file (in terms of processing steps) is selected.
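For reference, a Jaccard comparison scores the overlap between two sets as |A ∩ B| / |A ∪ B|. The sketch below shows the general shape of such scoring; the weights are illustrative, not MARS's actual formula:

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |A ∩ B| / |A ∪ B|."""
    a, b = set(a), set(b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def score_variant(rows_recovered, tables_recovered, variant_rows,
                  exemplar_excerpt):
    """Illustrative scoring: more recovered rows/tables and closer
    resemblance to exemplar data both raise the score."""
    return (
        rows_recovered
        + 10 * tables_recovered
        + 100 * jaccard(variant_rows, exemplar_excerpt)
    )
```

A variant that recovers many rows but resembles nothing in the exemplar scores lower than one whose rows actually overlap with known data, which is the point of the Jaccard term.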
If the file fails all attempts at recovery, the original database is marked for byte-carving. The internal binary data is extracted and parsed with the help of Unfurl and blackboxprotobuf. The resulting SQLite database contains converted plain text data, decoded protobufs, and possible timestamp flags to help with manual parsing.
Next Steps
Once the candidates scan is complete, we suggest you run the Data Recovery Comparison report. You can find it under Reports & Visualization from the main menu.
Free Match Mode
Free Match Mode allows you to process SQLite databases without needing a full exemplar scan. This is useful when:
- You don't have access to a reference system
- You're working with custom or unknown database types
- You need to quickly salvage random corrupt SQLite databases
Free Match Options
| Option | Description |
|---|---|
| Free Exemplar Scan | Generate schemas/rubrics from user-provided SQLite files. No catalog used. |
| Free Candidate Scan | Process corrupt SQLite databases against free exemplars |
| Free Salvage | Quick SQLite recovery without any exemplar. Databases with identical schemas are combined. Corrupt databases are byte-carved. Will also pick up text, plist, or cache files that MARS currently scans for. |
Reports & Visualization
MARS generates interactive HTML reports for analyzing recovered data and comparing it against exemplar baselines.
Data Recovery Comparison
The comparison report shows side-by-side analysis of exemplar vs candidate data:
- Database Statistics – Row counts, recovery rates per database
- Recovery Metrics – Overall success rates and coverage
- Timeline Coverage – Date ranges present in exemplar vs recovered
- Activity Heatmaps – Visual representation of data density over time
Chart Plotter
The interactive chart plotter allows you to visualize database columns over time. Unless you know exactly what you want to plot, we recommend you have the database open while plotting.
- Select Chart Plotter from the Reports & Visualization menu
- Add a series, then select an eligible exemplar folder
- Select a database from an exemplar or candidate (if available)
- Only plottable databases are shown (they must have a timestamp column)
- The number of plottable tables and rows is shown for reference
- Choose a table
- Only tables with a timestamp column are shown
- The number of rows in the selected table is shown for reference
- If the table has more than one timestamp column, you can select which one to use
- Select the first column to plot
- If you're finished, hit enter. Or add an additional series (up to 5)
- You may go back and add a series from a completely different database
- You can also compare exemplar vs candidate databases
- If the table has a text column, it can be added as a tooltip hover label
- Select 'plot selected series', then choose a chart type.
- 'Overlay Line' (up to 2 series)
- 'Stacked Line' (up to 5 series)
- 'Scatter' (up to 2 series)
- 'Bar' (single series)
- Choose a timezone setting
- Local timezone (your current timezone)
- UTC (Coordinated Universal Time)
- Custom offset (e.g., +05:00)
- Select light or dark theme
- Select optional rolling mean data smoothing
- Rolling mean smoothing can help visualize trends in noisy data
- Note that smoothed data can obfuscate forensically important details. Use with caution
- If Kaleido is installed, select whether to export the chart as a static image
- Generate Plotly chart
The exported chart (and optional PNG) can be found in the exemplar's
reports/plots folder.
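For reference, a rolling mean replaces each point with the average of its last N values, which dampens noise but also blurs sharp events (hence the caution above). A pure-Python sketch of the idea:

```python
from collections import deque

def rolling_mean(values, window=3):
    """Average each value with up to window-1 preceding values.

    Early points use a shorter window rather than being dropped,
    so the output has the same length as the input.
    """
    buf = deque(maxlen=window)
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out
```

Larger windows produce smoother curves at the cost of hiding short spikes; forensically significant single-event anomalies can disappear entirely.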
See the Plotter documentation for details on supported timestamp formats, chart types, and multi-database plotting.
Report Index
The Report Index provides an HTML overview of all scans in your project, with links to individual scan reports and output folders.
Report Modules
MARS includes several built-in report modules to extend data analysis. Each module generates structured output (CSV, JSON, HTML) from specific artifact types.
Current Modules
| Module | What It Shows | README |
|---|---|---|
| WiFi Report | This module parses macOS Wi-Fi and network artifacts to generate an HTML report summarizing network activity. | Documentation |
| Biome Parser | This module parses macOS Biome SEGB files to extract system telemetry and application metrics. | Documentation |
| Firefox Cache Parser | This module parses Firefox's cache2 directory structure to extract HTTP artifacts including URLs, response headers, and cached content. | Documentation |
| Firefox JSONLZ4 Parser | This module salvages Firefox JSONLZ4 (mozLz4) compressed JSON files, extracting sessions, bookmarks, and telemetry data from both intact and carved/truncated files. | Documentation |
Export Data
MARS can package your analysis for use with external forensic tools or share exemplar
data with colleagues. Exports are saved to the exports/ folder in your
project directory.
Export Sources
| Export Type | Contents |
|---|---|
| Exemplar Only | Reference databases from the exemplar scan - your baseline data |
| Candidate Only | Recovered databases from candidate scan (catalog matches only) |
| Combined | Merged exemplar + candidate data. Deduplicates rows across sources while preserving unique recovered data. Best for comprehensive analysis. |
Directory Structure Options
| Option | Result |
|---|---|
| Flat | Essentially, a simplified copy of the MARS folder structure. |
| Full Path | Preserves original macOS directory structure (e.g., Users/admin/Library/Safari/). Useful when path context matters for analysis. For candidates exports, databases with L&F data that can't be confidently attributed to a specific user will be placed in a _multi user folder. |
Data Source Tracking
When exporting Combined data, you can optionally add a
data_source column to each table. This column indicates where each row
originated:
- exemplar – Row came from the live macOS exemplar scan
- carved_(unknown) – Row was recovered from carved candidate databases
- found_(unknown) – Row originated in lost_and_found data
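Mechanically, adding a source column is plain SQL. A hedged sketch of how an exporter might tag existing rows (table and label names are illustrative):

```python
import sqlite3

def add_source_column(con, table, label="exemplar"):
    """Add a data_source column (if missing) and tag untagged rows.

    Rows merged in later from candidate databases would be inserted
    with their own labels, e.g. 'carved_...' or 'found_...'.
    """
    cols = [row[1] for row in con.execute(f"PRAGMA table_info({table})")]
    if "data_source" not in cols:
        con.execute(f"ALTER TABLE {table} ADD COLUMN data_source TEXT")
    con.execute(
        f"UPDATE {table} SET data_source = ? WHERE data_source IS NULL",
        (label,),
    )
    con.commit()
```

Because the column is additive, external tools that don't know about it simply ignore the extra field.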
Exemplar Packages
Exemplar Package Export
The Export Data menu allows you to create a shareable bundle containing schemas, rubrics, and database files (all data removed) that can be imported into other MARS projects. This allows teams to share exemplar configurations without transferring full database contents.
Exemplar Package Import
Use the Import Data menu to import an external exemplar package. It will then be available as an exemplar choice from the Candidates Scan menu.
Settings
Project settings allow you to customize scan behavior, configure recovery options, and enable/disable processing modules.
General Settings
| Setting | Description |
|---|---|
| Debug Mode | Enable verbose diagnostic output. Shows detailed processing information in the console. Progress bars are automatically hidden when debug mode is enabled. |
| Save Debug to File | Write debug output to mars_debug.log in the project folder. Requires debug mode to be enabled. |
Exemplar Scan Settings
| Setting | Description |
|---|---|
| Epoch Minimum | Minimum date for valid timestamps (YYYY-MM-DD). Timestamps before this date are ignored during semantic role detection. Default: 2000-01-01 |
| Epoch Maximum | Maximum date for valid timestamps. Timestamps after this are ignored. Default: 2038-01-19 |
| Min Role Sample Size | Minimum non-null values needed before assigning a semantic role (timestamp, UUID, etc.) to a column. Default: 5 |
| Min Timestamp Rows | Minimum timestamp values required to mark a column as a timestamp role. Default: 1 |
| Catalog Groups | Enable/disable specific artifact groups (chrome, quarantine, unified_logs, etc.). |
| Excluded File Types | Skip general file types during scanning (cache, log, keychain, database). |
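To illustrate how the epoch bounds act as a sanity window, the sketch below tests a numeric value against both Unix and Cocoa epochs (Cocoa timestamps count seconds from 2001-01-01). This is a simplified stand-in for MARS's detection logic:

```python
from datetime import datetime, timezone

# Seconds between 1970-01-01 (Unix epoch) and 2001-01-01 (Cocoa epoch)
COCOA_EPOCH_OFFSET = 978307200

def plausible_timestamp(value, epoch_min="2000-01-01", epoch_max="2038-01-19"):
    """Return 'unix' or 'cocoa' if value lands inside the configured
    date window under either epoch interpretation, else None."""
    lo = datetime.fromisoformat(epoch_min).replace(
        tzinfo=timezone.utc).timestamp()
    hi = datetime.fromisoformat(epoch_max).replace(
        tzinfo=timezone.utc).timestamp()
    if lo <= value <= hi:
        return "unix"
    if lo <= value + COCOA_EPOCH_OFFSET <= hi:
        return "cocoa"
    return None
```

Values that fit no epoch within the window are rejected, which is why widening Epoch Minimum/Maximum can surface more timestamp columns at the cost of more false positives.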
Candidate Scan / Carver Settings
| Setting | Description |
|---|---|
| Timestamp Start/End | Date range for timestamp tagging during carving. Timestamps outside this range are filtered. Default: 2015-01-01 to 2030-01-01 |
| Timestamp Filter Mode |
strict (confirmed only), balanced (+ likely),
permissive (+ ambiguous), or all. Default: permissive
|
| Decode Protobuf | Attempt to decode protobuf data in BLOB fields. Default: enabled |
| Export CSV | Generate CSV output alongside JSONL for carved data. Default: disabled |
| Pretty JSON | Format protobuf JSON output for readability. Default: enabled |
| Dissect All Variants | Run sqlite_dissect on all variants even without exemplar match. Advanced option. Default: disabled |
Module Management
Enable or disable individual report modules to customize what gets processed during scans.
Module settings are saved in the project's .marsproj file and restored when you reopen the project.
Artifact Recovery Catalog Manager
The Artifact Recovery Catalog (ARC) defines which files and databases MARS searches for during scans. The ARC Manager allows you to view, edit, and extend the catalog with custom entries.
Catalog Structure
The catalog is organized hierarchically:
- Groups – Top-level categories (Browsers, Mail, System, etc.)
- Targets – Individual artifacts within each group (Safari History, Chrome Cookies, etc.)
- Fields – Properties of each target (glob pattern, scope, file type, etc.)
Required Target Fields
| Field | Description |
|---|---|
| Name | Display name for the artifact (e.g., Safari History) |
| Glob Pattern | File path pattern using wildcards. Supports * (single level) and ** (recursive). Example: Users/*/Library/Safari/History.db |
| Scope | user (handles Users/*/...) or system (system-wide in /Library or /private/var) |
| File Type | database, log, or cache |
| Exemplar Pattern | Output path pattern for exemplar files (e.g., databases/catalog/Safari History*) |
For the complete field list, see the ARC Manager documentation.
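The two wildcard forms differ only in whether they may cross a path separator. A sketch of that distinction (illustrative; MARS's matcher may differ in details):

```python
import re

def glob_to_regex(pattern):
    """Translate a catalog-style glob: * = one path level, ** = recursive."""
    out = []
    i = 0
    while i < len(pattern):
        if pattern.startswith("**", i):
            out.append(".*")        # ** may cross directory boundaries
            i += 2
        elif pattern[i] == "*":
            out.append("[^/]*")     # * stays within one path level
            i += 1
        else:
            out.append(re.escape(pattern[i]))
            i += 1
    return re.compile("^" + "".join(out) + "$")

def matches(path, pattern):
    return bool(glob_to_regex(pattern).match(path))
```

So Users/*/Library/Safari/History.db matches exactly one username level, while private/var/db/diagnostics/**/* descends through any number of subdirectories.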
Using the ARC Manager
Access via Settings → Artifact Recovery Catalog (ARC) Manager:
- Browse – Navigate groups and view existing targets
- Edit – Modify fields of existing entries
- Add New – Create new entries using the guided wizard
- Delete – Remove entries
Creating Custom Entries
The New Entry Wizard guides you through creating a catalog entry:
- Select group (or create new group)
- Enter artifact name
- Specify glob pattern (validated for correctness)
- Set scope (user/system) and file type
- Add optional description and notes
- Users/*/Library/Application Support/App/data.db – User-scoped database
- private/var/db/diagnostics/**/* – Recursive with structure
- Library/Preferences/*.plist – System preferences
For more information, see the ARC Manager documentation.
Catalog File Location
The catalog is stored at:
src/resources/macos/artifact_recovery_catalog.yaml
Custom entries can be added directly to this file or through the ARC Manager UI.
Utilities
Mount EWF/E01 Image
Mount forensic disk images (EWF format) for exploration or scanning.
This feature is currently only available on macOS, and requires FUSE-T to be installed:
brew tap macos-fuse-t/homebrew-cask
brew install fuse-t
brew install fuse-t-sshfs
Additional Documents
Links to in-depth information on specific MARS components:
| Topic | Notes |
|---|---|
| MARS Technical Architecture | Extended MARS Technical Architecture |
| MARS Pipeline | File Structure of the MARS scanner pipeline |
| Lost & Found (LF) Processor | Lost & Found (LF) Database Reconstruction |
| Text Fingerprinter | Text "fingerprinting" for WiFi logs, plists, etc. |
| Byte-Carver | Byte-Carver for corrupt databases |
| ARC Catalog | Artifact Recovery Catalog Manager |
| Plotter | Plotly Database Plotter |
| Report Modules | Report Modules overview for developers |
| WiFi Report | Report Module |
| Biome Parser | Report Module |
| Firefox Cache Parser | Report Module |
| Firefox JSONLZ4 Parser | Report Module |
Terminology
Key terms used throughout MARS:
| Term | Definition |
|---|---|
| Exemplar | A reference scan from a known-good system; provides the "source of truth" for database schemas |
| Candidate | Carved or recovered files being analyzed and classified against the exemplar |
| Rubric | A JSON schema definition extracted from an exemplar database, used for matching |
| Catalog | Exact schema matches – recovered databases that perfectly match exemplar schemas |
| Metamatch | Non-catalog matches – recovered candidate databases with intact data. Those with identical schemas are combined and deduplicated |
| Found Data | L&F fragments with match hints (NEAREST) or no match (ORPHAN). Low confidence data requiring manual review - not intact databases |
| Variant | SQLite recovery method: O=Original, C=Clone, R=Recover (.recover command), D=Dissect |
| Lost & Found | SQLite's internal recovery mechanism for damaged database pages |
| Schema Fingerprint | Hash-based identifier enabling O(1) instant matching of database schemas |
| FDA | Full Disk Access – macOS permission required to read protected system databases |
| Package | A shareable exemplar bundle containing schemas, rubrics, and a manifest.json |