trellis-mac

Run TRELLIS.2 image-to-3D generation natively on Mac.

It is a port of Microsoft’s TRELLIS.2 – a state-of-the-art image-to-3D model – from CUDA-only to Apple silicon via PyTorch MPS. No NVIDIA GPU required.

Produces meshes with 400K+ faces from a single image in ~3.5 minutes on an M4 Pro.

The output includes vertex-colored OBJ and GLB files ready for use in 3D applications.

  • macOS on Apple Silicon (M1 or later)
  • Python 3.11+
  • 24GB+ unified memory recommended (the 4B-parameter model is large)
  • ~15GB disk space for model weights (downloaded on first run)
# Clone this repo
git clone https://github.com/shivampkumar/trellis-mac.git
cd trellis-mac

# Log into HuggingFace (needed for gated model weights)
hf auth login

# Request access to these gated models (usually instant approval):
#   https://huggingface.co/facebook/dinov3-vitl16-pretrain-lvd1689m
#   https://huggingface.co/briaai/RMBG-2.0

# Run setup (creates venv, installs deps, clones & patches TRELLIS.2)
bash setup.sh

# Activate the environment
source .venv/bin/activate

# Generate a 3D model from an image
python generate.py path/to/image.png

Output files are saved in the current directory (use --output to specify a different path).

# Basic usage
python generate.py photo.png

# With options
python generate.py photo.png --seed 123 --output my_model --pipeline-type 512

# All options
python generate.py --help

Option           Default      Description
--seed           42           Random seed for generation
--output         output_3d    Output file name (without extension)
--pipeline-type  512          Pipeline resolution: 512, 1024, or 1024_cascade

TRELLIS.2 depends on several CUDA-only libraries. This port replaces them with pure-PyTorch and pure-Python alternatives:

Original (CUDA)      Replacement               Purpose
flex_gemm            backends/conv_none.py     Sparse 3D convolution via gather-scatter
o_voxel._C hashmap   backends/mesh_extract.py  Mesh extraction from the dual grid
flash_attn           PyTorch SDPA              Scaled dot-product attention for sparse transformers
cumesh               stub (graceful skip)      Hole filling, mesh simplification
nvdiffrast           stub                      Differentiable rasterization (texture export)

Additionally, all hardcoded .cuda() calls throughout the codebase were patched to use the active device instead.
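As a rough illustration of that patch (the helper name below is hypothetical, not the repo's actual code), each patched call site routes tensors to whichever accelerator is available:

```python
import torch

def active_device() -> torch.device:
    """Pick MPS on Apple silicon, CUDA if present, else fall back to CPU."""
    if torch.backends.mps.is_available():
        return torch.device("mps")
    if torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

# Instead of a hardcoded `tensor.cuda()`, a patched call site does:
x = torch.randn(4, 4).to(active_device())
```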

Sparse 3D convolution (backends/conv_none.py): implements submanifold sparse convolution by creating a spatial hash of active voxels, gathering neighboring features for each kernel position, applying weights via matrix multiplication, and scatter-adding the results back. Neighbor maps are cached per tensor to avoid recomputation.
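The gather-scatter scheme above can be sketched as follows. This is a minimal, unoptimized illustration of the technique (no neighbor-map caching), not the repo's actual conv_none.py:

```python
import torch

def sparse_conv3d(coords, feats, weight):
    """Submanifold sparse 3D convolution via gather-scatter.

    coords: (N, 3) integer coordinates of active voxels.
    feats:  (N, C_in) features at those voxels.
    weight: (K, K, K, C_in, C_out) dense kernel, K odd.
    """
    K = weight.shape[0]
    r = K // 2
    # Spatial hash: coordinate tuple -> row index.
    index = {tuple(c): i for i, c in enumerate(coords.tolist())}
    out = torch.zeros(feats.shape[0], weight.shape[-1])
    for dx in range(-r, r + 1):
        for dy in range(-r, r + 1):
            for dz in range(-r, r + 1):
                # Gather: for each active site, look up the input neighbor at
                # this kernel offset (submanifold: outputs only at active sites).
                pairs = [(i, index[(x + dx, y + dy, z + dz)])
                         for i, (x, y, z) in enumerate(coords.tolist())
                         if (x + dx, y + dy, z + dz) in index]
                if not pairs:
                    continue
                out_idx, in_idx = map(list, zip(*pairs))
                w = weight[dx + r, dy + r, dz + r]            # (C_in, C_out)
                contrib = feats[in_idx] @ w                    # weights via matmul
                out.index_add_(0, torch.tensor(out_idx), contrib)  # scatter-add
    return out
```

With a 3×3×3 kernel that is the identity at its center and zero elsewhere, the output reproduces the input features, which is a quick sanity check on the gather/scatter indexing.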

Mesh extraction (backends/mesh_extract.py): reimplements flexible_dual_grid_to_mesh using Python dictionaries instead of CUDA hashmap operations. It builds a coordinate-to-index lookup table, finds the voxels associated with each edge, and triangulates each quad using normal-alignment projections.
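Two of those pieces can be sketched in isolation (illustrative only; the function names are not the repo's actual API): the dictionary that stands in for the CUDA hashmap, and a quad split chosen by how well each candidate triangulation's face normals align with the averaged vertex normal:

```python
import torch

def build_lookup(coords):
    """Map an (x, y, z) voxel coordinate to its row index, replacing a GPU hashmap."""
    return {tuple(c): i for i, c in enumerate(coords.tolist())}

def quad_to_tris(quad, vertices, normals):
    """Split a quad (4 vertex indices) along the diagonal whose two
    triangles best agree with the averaged vertex normal."""
    a, b, c, d = quad
    n = normals[list(quad)].mean(0)

    def tri_normal(i, j, k):
        # Unnormalized face normal of triangle (i, j, k).
        return torch.linalg.cross(vertices[j] - vertices[i],
                                  vertices[k] - vertices[i])

    # Candidate diagonals: a-c vs b-d; score by alignment with n.
    score_ac = tri_normal(a, b, c) @ n + tri_normal(a, c, d) @ n
    score_bd = tri_normal(a, b, d) @ n + tri_normal(b, c, d) @ n
    if score_ac >= score_bd:
        return [(a, b, c), (a, c, d)]
    return [(a, b, d), (b, c, d)]
```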

Attention (patched full_attn.py): adds an SDPA backend to the sparse attention module. It pads variable-length sequences into a batch, runs torch.nn.functional.scaled_dot_product_attention, then unpads the result.
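The pad → SDPA → unpad pattern looks roughly like this (a self-contained sketch of the technique, not the repo's exact full_attn.py patch):

```python
import torch
import torch.nn.functional as F

def padded_sdpa(qkv_per_seq):
    """qkv_per_seq: list of (q, k, v) tuples, each of shape (len_i, H, D)."""
    lens = [q.shape[0] for q, _, _ in qkv_per_seq]
    L = max(lens)

    def pad(t):  # (len_i, H, D) -> (L, H, D), zero-padded along the length dim
        return F.pad(t, (0, 0, 0, 0, 0, L - t.shape[0]))

    q = torch.stack([pad(q) for q, _, _ in qkv_per_seq])  # (B, L, H, D)
    k = torch.stack([pad(k) for _, k, _ in qkv_per_seq])
    v = torch.stack([pad(v) for _, _, v in qkv_per_seq])
    # SDPA expects (B, H, L, D); mask out padded key positions.
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    valid = torch.arange(L)[None, :] < torch.tensor(lens)[:, None]  # (B, L)
    attn_mask = valid[:, None, None, :]                             # (B, 1, 1, L)
    out = F.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
    out = out.transpose(1, 2)                                       # (B, L, H, D)
    return [out[i, :n] for i, n in enumerate(lens)]                 # unpad
```

For a full-length sequence the mask is all-True, so the result matches running SDPA on that sequence alone, which makes the padding logic easy to verify.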

Benchmarks on an M4 Pro (24GB), pipeline type 512:

Stage                      Time
Model loading              ~45s
Image preprocessing        ~5s
Sparse structure sampling  ~15s
Shape SLAT sampling        ~90s
Texture SLAT sampling      ~50s
Mesh decoding              ~30s
Total                      ~3.5 min

Memory usage during generation is approximately 18GB of unified memory.

  • No texture export: texture baking requires nvdiffrast (a CUDA-only differentiable rasterizer). Meshes are exported with vertex colors only.
  • No hole filling: hole filling requires cumesh (CUDA-only), so meshes may contain small holes.
  • Slower than CUDA: pure-PyTorch sparse convolution is ~10x slower than the CUDA flex_gemm kernels. This is the main bottleneck.
  • No training support: inference only.

The porting code (backend, patches, scripts) in this repository is released under the MIT License.

Upstream model weights are subject to their own licenses:

  • TRELLIS.2 by Microsoft Research – original model and codebase
  • DINOv3 by Meta – image feature extraction
  • RMBG-2.0 by BRIA AI – background removal


