Generators - Kaisei-Fukaya/Graphical-Asset-Generation-Mockup-Files GitHub Wiki

Generator nodes produce new assets by interpreting inputs.

Image from Text

The Image from Text node produces bitmap images based on provided text prompts.

  • Different machine-learning models can be selected from the "Models" dropdown.
  • For example, models may be similar to DALL·E 2 or Stable Diffusion.
  • The default resolution is 512×512 pixels; however, you can provide your own resolution values via the "Resolution Width" and "Resolution Height" inputs.
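The inputs above can be summarised in a minimal sketch. This is purely illustrative: the class, field, and model names below are assumptions and not part of the actual tool.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Image from Text node's parameters; the
# class, field, and model names are illustrative, not the tool's API.
@dataclass
class ImageFromTextNode:
    prompt: str
    model: str = "stable-diffusion"  # chosen from the "Models" dropdown
    resolution_width: int = 512      # default resolution is 512x512
    resolution_height: int = 512

node = ImageFromTextNode(prompt="a red ceramic teapot")
```

Leaving the resolution fields untouched gives the 512×512 default described above.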
Screenshots: Graph view and Inspector view.

Mesh from Photo

The Mesh from Photo node produces textured meshes from photographic images.

  • The "Input Type" can be switched between "Single-view" and "Multi-view".
  • When set to "Single-view", a single photo input is required.
  • When set to "Multi-view", four photo inputs are required, each representing the subject from a different viewing angle. This results in higher-quality outputs.
  • Different machine-learning models can be selected from the "Models" dropdown.
  • These models specialise in specific types of asset, so it is important to pick the correct one for the task.
  • By default this node provides one output per input; however, you can specify how many results to generate per input via the "Amount to generate" port.
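The "Input Type" rules above can be sketched as a small helper. The function name is an illustrative assumption, not part of the actual tool.

```python
# Hypothetical sketch of the Mesh from Photo "Input Type" rules:
# Single-view takes one photo, Multi-view takes four. The function
# name is illustrative, not the tool's API.
def required_photo_inputs(input_type: str) -> int:
    """Return how many photo inputs the node expects."""
    if input_type == "Single-view":
        return 1
    if input_type == "Multi-view":
        return 4
    raise ValueError(f"unknown input type: {input_type!r}")
```

For example, `required_photo_inputs("Multi-view")` returns 4, matching the four viewing angles described above.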
Screenshots: Graph view and Inspector view.

Mesh from Sketch

The Mesh from Sketch node produces meshes from hand-drawn images.

  • The "Input Type" can be switched between "Single-view" and "Multi-view".
  • When set to "Single-view", a single sketch input is required.
  • When set to "Multi-view", four sketch inputs are required, each representing the subject from a different viewing angle. This results in higher-quality outputs.
  • Different machine-learning models can be selected from the "Models" dropdown.
  • These models specialise in specific types of asset, so it is important to pick the correct one for the task.
  • By default this node provides one output per input; however, you can specify how many results to generate per input via the "Amount to generate" port.
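The "Amount to generate" behaviour above can be sketched as follows. The function and the string-based mesh placeholders are illustrative assumptions, not the tool's API.

```python
# Hypothetical sketch of the "Amount to generate" port: each input
# sketch yields that many candidate meshes. Names are illustrative.
def generate_meshes(sketches, amount_to_generate=1):
    results = []
    for sketch in sketches:
        for i in range(amount_to_generate):
            # Placeholder standing in for a generated mesh asset.
            results.append(f"{sketch}-mesh-{i}")
    return results
```

With the default of 1, two sketches produce two meshes; setting `amount_to_generate=3` would produce six.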
Screenshots: Graph view and Inspector view.

Interpolator 3D

The Interpolator 3D node generates a new textured mesh based on two provided textured meshes.

  • Different machine-learning models can be selected from the "Models" dropdown.
  • These models specialise in specific types of asset, so it is important to pick the correct one for the task.
  • A slider determines how closely the output resembles each input. By default it sits at the midpoint, meaning that outputs will have "equal" similarity to both inputs.
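The slider described above can be sketched as an interpolation weight. This is a minimal sketch under the assumption that the slider maps to a value in [0, 1]; the function name is illustrative, not the tool's API.

```python
# Hypothetical sketch of the Interpolator 3D similarity slider:
# position 0.0 favours input mesh A entirely, 1.0 favours input
# mesh B, and the default midpoint weights both inputs equally.
def blend_weights(slider: float = 0.5):
    if not 0.0 <= slider <= 1.0:
        raise ValueError("slider position must be in [0, 1]")
    return 1.0 - slider, slider
```

The default `blend_weights()` returns `(0.5, 0.5)`, matching the "equal similarity to both inputs" default above.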
Screenshots: Graph view and Inspector view.

Style Transfer 2D

The Style Transfer 2D node generates an image by combining the content and style of two different bitmap inputs.

  • Different machine-learning models can be selected from the "Models" dropdown.
  • For example, models may be similar to this.
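The node's two inputs can be summarised in a minimal sketch. The class, field, and model names are illustrative assumptions, not part of the actual tool.

```python
from dataclasses import dataclass

# Hypothetical sketch of the Style Transfer 2D node's inputs; the
# class, field, and model names are illustrative, not the tool's API.
@dataclass
class StyleTransfer2DNode:
    content_input: str  # bitmap supplying the content (subject, layout)
    style_input: str    # bitmap supplying the style (colour, texture)
    model: str = "style-transfer-v1"  # chosen from the "Models" dropdown

node = StyleTransfer2DNode(content_input="photo.png",
                           style_input="painting.png")
```

The output would depict the content of `photo.png` rendered in the style of `painting.png`.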
Screenshots: Graph view and Inspector view.

Style Transfer 3D

The Style Transfer 3D node generates a textured mesh by combining the content and style of two different textured mesh inputs.

  • Different machine-learning models can be selected from the "Models" dropdown.
  • For example, models may be similar to this.
Screenshots: Graph view and Inspector view.