# Goal

Create a "profile", i.e. a JSON file, to be used by the script that mass-converts `*.glb` files to `*.png`.

## Introduction

This exercise only requires that you launch a small HTTP server in a console. Everything else is handled through the HTML page lab.html. You will interact with lab.html's 3D rendering of a glb file that is included with this project.

## Steps

Open a browser, or a new window of the browser (Ctrl-n in Firefox), and resize it to a small rectangle. You are reducing the size to mimic what the PNG cell in the manifest table will look like. For example, this reduced window is 521 × 432 pixels.



In a console:

```
cd ~/work/Voron/voronstl/web
python3 -m http.server 8001
```

You should have a console that looks like this:



It is necessary to start the web server within the "web" directory, as that directory will be the server's root.

Visit:

http://localhost:8001/lab.html

You will see a zoomed-in image:



Zoom out until the entire part fits within the window.

Click the Controls bar to collapse the sub-menus.



Move the object to center it in the window using Shift + left mouse button. You want the entire part to fit within the view and be centered.

Click the Controls bar to open the sub-menus. Adjust the light intensity to a high value, if not the maximum. This brightens the image, providing contrast with the shadows that help you discern the part.



Optional: Save the PNG for your own reference.

Click "Export Profile" and save your current settings.



You now have a specification of sizing and angles which may work well for all of the other parts. Note: I applied the specifications saved above in a mass PNG creation, and all the other parts looked very good.

### Additional Information

Here is what an exported profile JSON file looks like:

<table style="width:100%; table-layout:fixed;">
<tr>
<th>Image</th>
<th>Description</th>
</tr>

<tr>
<td style="width:50%; vertical-align:top;">
<img src="20260304_102919_Wed.png" >
</td>

<td style="width:50%; vertical-align:top;"><pre style="white-space:pre-wrap; overflow:auto; max-width:100%;">jlpoole@jp ~/work/Voron/voronstl/web $ jq . out/three_profile_20260304_102657.json
{
  "provenance": "lab.html exportProfile Wed Mar 04 2026 10:26:57 GMT-0800 (Pacific Standard Time)",
  "output": {
    "width": 500,
    "height": 315,
    "pixelRatio": 1
  },
  "scene": {
    "background": 16777215
  },
  "camera": {
    "type": "PerspectiveCamera",
    "fov": 50,
    "near": 0.1,
    "far": 1000,
    "position": [
      11.93432933230491,
      11.71721921868296,
      13.265804443912849
    ],
    "up": [
      0,
      1,
      0
    ]
  },
  "controls": {
    "target": [
      2.075876663103527,
      -2.147814989328729,
      0.7122034073683343
    ]
  },
  "renderParams": {
    "wireframe": false,
    "edgeAngle": 30,
    "lightIntensity": 1.5
  },
  "lights": {
    "directional": {
      "position": [
        5,
        5,
        5
      ],
      "intensity": 1.5
    },
    "ambient": {
      "intensity": 0.6
    }
  }
}
jlpoole@jp ~/work/Voron/voronstl/web $
</pre>
</td>
</tr>
</table>

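Before a batch run, it can help to parse the exported profile and sanity-check a few key fields. Below is a minimal Python sketch; the field names mirror the example profile above, but the numeric values are abbreviated placeholders, not a real export:

```python
import json

# Abbreviated stand-in for an exported profile; field names match the
# example above, values are illustrative.
profile_text = """
{
  "output": {"width": 500, "height": 315, "pixelRatio": 1},
  "scene": {"background": 16777215},
  "camera": {"type": "PerspectiveCamera", "fov": 50, "near": 0.1, "far": 1000,
             "position": [11.93, 11.72, 13.27], "up": [0, 1, 0]},
  "controls": {"target": [2.08, -2.15, 0.71]},
  "renderParams": {"wireframe": false, "edgeAngle": 30, "lightIntensity": 1.5}
}
"""

profile = json.loads(profile_text)

# The scene background is stored as a decimal integer; 16777215 == 0xFFFFFF (white).
background_hex = f"#{profile['scene']['background']:06x}"

width = profile["output"]["width"]
height = profile["output"]["height"]
camera_position = profile["camera"]["position"]
```

A check like this catches a truncated or hand-edited profile before it silently produces hundreds of bad PNGs.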
## ChatGPT's analysis of what is important for consistency

Here is a response from ChatGPT that summarizes which variables are captured and saved so that they can be used to repeat the same type of transformation of `*.glb` to PNG.

## Response

Good — now you're at the point where it's worth making the capture **repeatable** across a whole directory of GLBs.

In practice, you want to "freeze" *everything that affects the pixels* and treat the GLB as the only variable.

## What to save (so every PNG is consistent)

### 1) Camera pose and projection

Save:

* `camera.position` (x,y,z)
* `camera.up` (x,y,z)
* `controls.target` (x,y,z) if you're using OrbitControls
* `camera.fov`, `camera.near`, `camera.far`
* (optional) whether you used perspective vs orthographic

These define "where the viewer is" and are the biggest determinant of repeatability.

### 2) Output framing rules (per-model normalization)

You need a deterministic rule for "how big the part appears in frame"; otherwise, parts with different sizes will look random.

Two common strategies:

**A. Fit-to-frame (recommended for mass renders)**
For each GLB:

* compute `Box3` bounds
* compute `center` and `size`
* set `controls.target = center`
* set camera distance so the whole bounding box fits (using FOV + max dimension)
* optionally lock azimuth/elevation (so every part is viewed from the same angle)

This yields consistent "fill" and eliminates manual tuning.

**B. Fixed camera (only if all parts are the same scale and you want the exact same viewpoint)**
Works if your GLBs are all in the same coordinate system/scale and similarly sized.

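The fit-to-frame distance in strategy A reduces to simple trigonometry. Here is a sketch of the math in Python; the padding factor and the depth back-off are conventions of this sketch, not a fixed Three.js API:

```python
import math

def fit_distance(size, fov_deg, aspect, padding=1.15):
    """Camera distance at which a bounding box of the given size fits the view.

    size: (sx, sy, sz) of the Box3; fov_deg: vertical field of view in degrees;
    aspect: width / height of the output image.
    """
    sx, sy, sz = size
    half_fov_v = math.radians(fov_deg) / 2.0
    # The horizontal half-FOV follows from the vertical FOV and aspect ratio.
    half_fov_h = math.atan(math.tan(half_fov_v) * aspect)
    # Distance needed so the box's height and width each fit their FOV;
    # depth is approximated by backing off an extra sz / 2.
    d_v = (sy / 2.0) / math.tan(half_fov_v)
    d_h = (sx / 2.0) / math.tan(half_fov_h)
    return padding * (max(d_v, d_h) + sz / 2.0)
```

The camera is then placed at `controls.target` plus this distance along the chosen viewing direction, which is what makes every part fill the frame the same way.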
### 3) Lighting rig

Save:

* light types (Directional/Hemisphere/Ambient)
* each light's `position`, `intensity`, `color`
* shadow settings if used

Your current screenshot shows a simple shaded render; keep lighting stable or you'll get different highlight/shadow shapes across parts.

### 4) Renderer and capture settings

Save:

* `width`, `height`
* the `devicePixelRatio` you used (often set explicitly for determinism)
* `renderer.toneMapping`, `outputColorSpace` (Three.js r152+ uses `renderer.outputColorSpace = THREE.SRGBColorSpace`)
* `renderer.setClearColor()` (white) and alpha behavior
* `preserveDrawingBuffer: true` (for browser capture)
* whether you use `physicallyCorrectLights` (if you do, keep it constant)

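Serialized into the profile, these renderer settings might look like the fragment below. The key names here are illustrative, not lab.html's actual export format:

```json
{
  "renderer": {
    "toneMapping": "ACESFilmic",
    "outputColorSpace": "srgb",
    "clearColor": "#ffffff",
    "clearAlpha": 1,
    "preserveDrawingBuffer": true
  }
}
```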
### 5) Material overrides / wireframe settings

If you're overriding materials for "clay render" or wireframe, save:

* your override material parameters (`MeshStandardMaterial` vs `MeshPhongMaterial`, roughness/metalness)
* edge/wire settings (`EdgesGeometry` threshold angle, line material linewidth/color)

### 6) Background / staging

Save:

* background color (white)
* whether you add a ground plane, axes, environment map, etc.

Even "invisible" helpers can affect auto-framing if they're included in bounds, so exclude them from bounds computation.

---

## The best thing to save: a small "render profile" JSON

Put this in a file like `render_profile.json`:

* canvas size
* camera angle (azimuth/elevation) OR explicit camera position
* fit-to-frame padding factor (e.g., 1.15)
* light positions/intensities
* material mode (`source`, `clay`, `wireframe`, `edges`)
* background color

Then your batch renderer (browser or headless) loads the profile + GLB and produces a PNG.

Example shape (conceptually):

```json
{
  "output": { "w": 1600, "h": 1200, "pixelRatio": 1 },
  "background": "#ffffff",
  "camera": { "fov": 35, "near": 0.01, "far": 1000, "azimuth": -35, "elevation": 20, "fitPadding": 1.15 },
  "lights": [
    { "type": "dir", "pos": [3,5,2], "intensity": 2.8 },
    { "type": "hem", "sky": "#ffffff", "ground": "#666666", "intensity": 0.6 }
  ],
  "materialMode": "clay"
}
```

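A batch driver would turn a profile's azimuth/elevation pair plus a fit distance into an explicit camera position. A Python sketch of that conversion, assuming the y-up orbit convention that matches the `"up": [0, 1, 0]` in the exported profile:

```python
import math

def orbit_position(target, distance, azimuth_deg, elevation_deg):
    """Camera position orbiting `target` at `distance`, y-up convention.

    azimuth is measured around the y axis; elevation is the angle above
    the horizontal plane through the target.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    tx, ty, tz = target
    x = tx + distance * math.cos(el) * math.sin(az)
    y = ty + distance * math.sin(el)
    z = tz + distance * math.cos(el) * math.cos(az)
    return (x, y, z)
```

With this, the profile can store the human-friendly angles while the renderer still gets the explicit `camera.position` it needs.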
---

## How to mass-create PNGs (two viable routes)

### Route 1: Headless batch (best)

Use **node + headless WebGL** (or Puppeteer driving Chrome) to render each GLB with the profile and write PNGs. This is what you want if you're doing hundreds/thousands.

### Route 2: Browser batch page (simpler)

A page that:

* loads a list of GLB URLs
* applies the profile
* renders each, downloads PNG automatically (or zips them)

This is simpler but less pleasant at scale.

---

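Whichever route you pick, the driver's first step is pairing each GLB with the PNG it should produce. A small Python sketch of that planning step; the directory names are illustrative, not the project's actual layout:

```python
from pathlib import Path

def plan_batch(glb_dir, out_dir):
    """Pair each *.glb in glb_dir with the *.png path the renderer should write."""
    out = Path(out_dir)
    return [(p, out / (p.stem + ".png"))
            for p in sorted(Path(glb_dir).glob("*.glb"))]
```

Keeping the plan separate from the rendering makes it easy to dry-run the batch and to resume after a failure.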
## One key question that determines the "right" profile

Do you want your PNGs to be:

* **same camera angle + fit-to-frame** (parts all fill the image similarly), or
* **same absolute camera** (so relative size differences are visible)?

If you tell me which you want, I'll give you a concrete "profile JSON + camera fit math + batch driver" that matches it.