workflow refinement

I think I found a better workflow for positioning my reference models.

  1. Expand the subdivision surface into a mesh and export an .obj from Silo.
  2. Load the rough page into Photoshop and convert it from indexed color to RGB (Image > Mode > RGB Color).
  3. Filter > Vanishing Point. Select “Return 3D Layer to Photoshop” in the tiny flyout menu in the upper left, draw a grid to match the sketch, and hit OK.
  4. 3D > New Layer from 3D File, and pick the exported .obj.
  5. Select that 3D layer, then select the camera tool.
  6. In the options bar, open the “View” dropdown and select the name of the layer generated by the Vanishing Point filter. This snaps the camera to the perspective determined by the grid drawn earlier.
  7. Use the object slide tool and the manipulator to position the 3D object to match the sketch.
  8. Export a PNG (or whatever) and drop it into Illustrator.
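As an aside, the indexed-to-RGB conversion in step 2 can also be batched outside Photoshop if you have a pile of rough pages. A minimal sketch using Pillow (assuming it's installed; the `roughs` folder name is just an example):

```python
# Batch-convert indexed-color rough pages to RGB with Pillow.
from pathlib import Path

from PIL import Image


def to_rgb(path: Path) -> Image.Image:
    """Open an image and return an RGB copy (no-op if it's already RGB)."""
    img = Image.open(path)
    return img if img.mode == "RGB" else img.convert("RGB")


if __name__ == "__main__":
    # "roughs" is a hypothetical folder of scanned pages.
    for page in Path("roughs").glob("*.png"):
        to_rgb(page).save(page.with_suffix(".rgb.png"))
```

This only covers the mode conversion, not the Vanishing Point or camera steps, which as far as I know still have to happen inside Photoshop.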

Trying to match the camera by fucking around in Silo was a lot of work, and doing it in Blender is just not happening. But this seems like it can go pretty quickly, now that I’ve worked out how to do it once!
