Source code gets Git, but large files are still stuck in shared drives, zip archives, Git LFS pain, full re-uploads, and cluttered S3 buckets.
ClearMesh brings a Git-like workflow to large binary folders.
You can commit a dataset, model checkpoint, media folder, CAD export, research output, or any large project folder; push it as fragments to object storage; clone or sync it to another machine; browse it in the web UI; and mount it read-only when a tool expects an ordinary file path.
The practical win is that ClearMesh doesn’t treat every version as a brand-new upload. Files are split into fragments, so when only part of a large folder changes, the unchanged fragments are reused instead of being uploaded and stored again.
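To make the fragment reuse concrete, here’s a minimal sketch of the idea in Python, using fixed-size chunks and SHA-256 content hashes. The chunk size and hashing scheme here are illustrative assumptions, not ClearMesh’s actual implementation:

```python
import hashlib

CHUNK_SIZE = 4  # tiny chunks for illustration; real systems use KB-to-MB fragments

def chunk_ids(data: bytes) -> list[str]:
    """Split data into fixed-size fragments and return their content hashes."""
    return [
        hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
        for i in range(0, len(data), CHUNK_SIZE)
    ]

v1 = b"AAAABBBBCCCCDDDD"
v2 = b"AAAABBBBXXXXDDDD"  # only the third fragment changed

ids_v1, ids_v2 = chunk_ids(v1), chunk_ids(v2)
# Only fragments whose hash is new need to be uploaded for v2
new_fragments = set(ids_v2) - set(ids_v1)
print(len(ids_v2), len(new_fragments))  # → 4 1
```

Of the four fragments in v2, only one is new; the other three are already in storage and are referenced, not re-uploaded.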
Simple example:
Let’s say your team has a 100 GB dataset.
Without versioned chunking, versions 1, 2, and 3 could each end up as a separate 100 GB copy in object storage. Even if only 5 GB changes each time, you’re still storing and transferring mostly duplicate data.
With ClearMesh, unchanged fragments are reused across versions, so a new version stores mostly the changed parts. When another machine syncs the repo, it skips the fragments it already has instead of pulling the same bytes again.
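Under the simplifying assumption that 5 GB of changes maps to roughly 5 GB of new fragments (ignoring chunk boundaries and metadata overhead), the storage math for that example works out like this:

```python
versions = 3
base_gb = 100     # initial dataset size
changed_gb = 5    # data that changes in each new version

# Naive approach: every version is a full standalone copy
naive = versions * base_gb

# Chunked approach: store the base once, then only the changed fragments
chunked = base_gb + (versions - 1) * changed_gb

print(naive, chunked)  # → 300 110
```

Three full copies cost 300 GB; with fragment reuse, the same history costs about 110 GB.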
The read-only mount helps when you don’t need the entire repo up front. Instead of downloading the full 100 GB folder before doing anything, you mount the repo as a normal read-only directory. Your tools see regular files and paths, while ClearMesh fetches the underlying fragments only when those byte ranges are actually read.
If a script only reads configuration, metadata, previews, or part of a larger artifact, it doesn’t need to pull everything first.
If a tool reads an entire file, ClearMesh fetches all of its fragments. This isn’t magical zero-download storage; it’s on-demand access through normal filesystem paths.
For teams dealing with datasets, model artifacts, media, VFX assets, CAD exports, or research folders, this might mean:
• Less duplicate storage across versions
• Fewer full-folder re-uploads
• Smaller transfers and less egress waste
• Fast repeated syncs when most files are unchanged
• Clean history for large artifacts
• Plain S3/R2-compatible storage underneath instead of a black box
I tested the mount with a real GGUF model file: the mounted file’s hash matched the original, random reads matched, boundary reads matched, and llama.cpp loaded the model from the mounted path.
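A verification harness along those lines is easy to sketch. The demo below substitutes a plain file copy for the mounted path, since the actual mount point and model paths are specific to each setup:

```python
import hashlib
import os
import random
import shutil
import tempfile

def sha256(path: str) -> str:
    """Hash a file incrementally in 1 MiB blocks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def read_range(path: str, offset: int, length: int) -> bytes:
    with open(path, "rb") as f:
        f.seek(offset)
        return f.read(length)

def verify_identical(a: str, b: str, samples: int = 10) -> bool:
    """Full-file hash match plus random and boundary range-read comparisons."""
    if sha256(a) != sha256(b):
        return False
    size = os.path.getsize(a)
    offsets = [0, max(size - 4096, 0)]                      # boundary reads
    offsets += [random.randrange(size) for _ in range(samples)]  # random reads
    return all(read_range(a, o, 4096) == read_range(b, o, 4096) for o in offsets)

# Demo: a temp copy stands in for the mounted file (paths are hypothetical)
with tempfile.TemporaryDirectory() as d:
    orig = os.path.join(d, "model.gguf")
    with open(orig, "wb") as f:
        f.write(os.urandom(1 << 20))
    mounted = os.path.join(d, "mounted.gguf")
    shutil.copy(orig, mounted)
    ok = verify_identical(orig, mounted)
    print(ok)  # → True
```

In practice you’d point `a` at the original file and `b` at the path inside the mount; the range reads double as a check that partial fetches return the right bytes.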
ClearMesh includes:
• Rust CLI
• Fragmented storage
• S3/R2-compatible Vault
• Commits and branches
• Clone and sync
• Read-only mount
• Optional client-side encryption
• Web Repo Browser
I’m launching it here because I want feedback from people who deal with large files every day.
Thanks for taking a look.