Source Position Tracking - chiba233/yumeDSL GitHub Wiki

Source Position Tracking

Token Traversal | Error Handling

Your DSL source is a blob of text. After parsing it becomes a token tree. But when the user makes a mistake, how do you tell them "line 3, column 12 has a problem"? That's what position tracking does: it stamps every token with its exact coordinates in the source.


How it works

                        Source text
                            │
                   trackPositions: true
                            │
                  ┌─────────┴──────────┐
                  ▼                    ▼
           buildPositionTracker     Parser scan
          (scan text once,          (normal parse flow)
           build line table)
                  │                    │
                  ▼                    ▼
           PositionTracker          On token emit:
             .resolve(offset)       look up table → fill position
           (binary search on
            the line table)
                                        │
                                        ▼
                              TextToken.position = {
                                start: { offset, line, column },
                                end:   { offset, line, column }
                              }

When off (the default): no table, no coordinates, near-zero cost. When on: the table is built by scanning the text once, and each coordinate lookup is a binary search. Both operations are fast.


Turning it on

Pass trackPositions: true to parseRichText or parseStructural. Both support it:

  • parseRichText → coordinates appear on TextToken.position
  • parseStructural → coordinates appear on StructuralNode.position

const tokens = parseRichText("hello $$bold(world)$$", {
    handlers: { bold: { inline: (t, ctx) => ({ type: "bold", value: t }) } },
    trackPositions: true,
});

// tokens[0].position → { start: {offset:0, line:1, column:1}, end: {offset:6, line:1, column:7} }
// tokens[1].position → { start: {offset:6, line:1, column:7}, end: {offset:21, line:1, column:22} }

What coordinates look like

SourcePosition

interface SourcePosition {
    offset: number;   // 0-indexed, UTF-16 code unit offset
    line: number;     // 1-indexed
    column: number;   // 1-indexed
}

SourceSpan

interface SourceSpan {
    start: SourcePosition;
    end: SourcePosition;
}

Every token's position is a SourceSpan, marking where it starts and ends in the source.
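
As a quick illustration of consuming these shapes, here is a hypothetical diagnostic formatter. `formatDiagnostic` is not part of the library; the interfaces simply restate the definitions above.

```typescript
interface SourcePosition { offset: number; line: number; column: number }
interface SourceSpan { start: SourcePosition; end: SourcePosition }

// Render a span's start as a compiler-style "line:column: message" string.
function formatDiagnostic(span: SourceSpan, message: string): string {
    const { line, column } = span.start;
    return `${line}:${column}: ${message}`;
}

formatDiagnostic(
    { start: { offset: 6, line: 1, column: 7 }, end: { offset: 21, line: 1, column: 22 } },
    "unknown handler 'bold'",
);
// → "1:7: unknown handler 'bold'"
```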


buildPositionTracker(text)

function buildPositionTracker(text: string): PositionTracker

The engine behind position tracking. What it does isn't complicated:

  1. Scan the text once, recording the offset after every newline (i.e. the first character of each line) into a line-offset table
  2. Return a PositionTracker with a resolve(offset) method
  3. You give resolve an offset, it does a binary search on the line-offset table and tells you which line and column that offset falls on
interface PositionTracker {
    resolve(offset: number): SourcePosition;
}
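
The three steps can be sketched in a few lines. This is an assumed implementation for illustration, not the library's actual source (details such as "\r\n" handling may differ):

```typescript
interface SourcePosition { offset: number; line: number; column: number }

function buildPositionTrackerSketch(text: string) {
    // Step 1: one scan, recording the offset of the first character of each line.
    const lineStarts = [0];
    for (let i = 0; i < text.length; i++) {
        if (text[i] === "\n") lineStarts.push(i + 1);
    }
    // Step 2: return a tracker exposing resolve(offset).
    return {
        resolve(offset: number): SourcePosition {
            // Step 3: binary search for the last line start <= offset.
            let lo = 0, hi = lineStarts.length - 1;
            while (lo < hi) {
                const mid = (lo + hi + 1) >> 1;
                if (lineStarts[mid] <= offset) lo = mid;
                else hi = mid - 1;
            }
            return { offset, line: lo + 1, column: offset - lineStarts[lo] + 1 };
        },
    };
}

buildPositionTrackerSketch("first line\nprefix").resolve(11);
// → { offset: 11, line: 2, column: 1 }
```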

Important: build it once and reuse it. Don't rebuild per slice; the next section explains why.


Parsing Substrings: baseOffset and tracker

In practice you often need to parse just a portion of a larger document, say, extracting a DSL block from a Markdown file. Coordinates get tricky: the parser sees a slice, but you want error positions to point at the original document.

Full document:  "first line\nprefix $$bold(world)$$ suffix"
                                    ↑ offset 18
Slice:          "$$bold(world)$$"
                    ↑ offset 0 in slice, but 18 in the original
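
The slice boundaries above can be verified with plain string operations:

```typescript
const fullText = "first line\nprefix $$bold(world)$$ suffix";

const start = fullText.indexOf("$$");                  // 18: where the DSL block begins
const slice = fullText.slice(start, start + "$$bold(world)$$".length);
// slice → "$$bold(world)$$", the text actually handed to the parser
```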

Two modes compared

Mode A: baseOffset only               Mode B: baseOffset + tracker (recommended)
┌─────────────────────────┐           ┌─────────────────────────┐
│ offset ✅ shifted        │           │ offset ✅ shifted        │
│ line   ❌ local to slice │           │ line   ✅ original doc   │
│ column ❌ local to slice │           │ column ✅ original doc   │
└─────────────────────────┘           └─────────────────────────┘

Recommended approach

const fullText = "first line\nprefix $$bold(world)$$ suffix";
const tracker = buildPositionTracker(fullText);  // build once!
const start = 18;
const slice = fullText.slice(start, 33);

const tokens = parseRichText(slice, {
    handlers: { bold: { inline: (t, ctx) => ({ type: "bold", value: t }) } },
    trackPositions: true,
    baseOffset: start,   // tell the parser where the slice starts
    tracker,             // use the original doc's line table for line/column
});

// tokens[0].position.start.offset → 18  (absolute in original)
// tokens[0].position.start.line   → 2   (line 2 in original)
// tokens[0].position.start.column → 8   (column 8 in original)

This exact example was re-checked on the current build and still resolves to:

  • offset: 18
  • line: 2
  • column: 8

So the rule remains:

  • baseOffset shifts the slice back into the original document
  • tracker determines whether line / column also resolve against the original document

If you omit one of the pieces, behavior degrades immediately:

  • Omit trackPositions
    • no position is produced at all
    • baseOffset and tracker have no effect
  • Pass only trackPositions: true
    • you get position
    • but coordinates are still local to the slice
  • Pass trackPositions: true + baseOffset
    • offset shifts back to the original document
    • line / column still stay slice-local
  • Pass trackPositions: true + baseOffset + tracker
    • only then do offset, line, and column all map fully back to the original document

See the drift directly: without tracker vs with tracker

The easiest way to remember this is not the "recommended" version, but the broken one.

Take the same full text:

first line
prefix $$bold(world)$$ suffix

We only parse the slice "$$bold(world)$$", which starts at offset 18 in the original document.

❌ baseOffset only, no tracker

const tokens = parseRichText(slice, {
    handlers: { bold: { inline: (t, ctx) => ({ type: "bold", value: t }) } },
    trackPositions: true,
    baseOffset: 18,
});

// tokens[0].position.start
// → { offset: 18, line: 1, column: 1 }

This is the trap:

  • offset already looks correct
  • but line / column are still slice-local
  • so it feels like positions were mapped back, but only half of that mapping actually happened

✅ baseOffset + tracker

const tracker = buildPositionTracker(fullText);

const tokens = parseRichText(slice, {
    handlers: { bold: { inline: (t, ctx) => ({ type: "bold", value: t }) } },
    trackPositions: true,
    baseOffset: 18,
    tracker,
});

// tokens[0].position.start
// → { offset: 18, line: 2, column: 8 }

Now all three fields are correct together:

  • offset → absolute offset in the original document
  • line → line 2 in the original document
  • column → column 8 in the original document

The short version:

  • without tracker: positions drift, especially line / column
  • with tracker: errors and highlights point back to the real source

parseRichText vs parseStructural position differences

The two APIs have different position semantics: parseRichText reflects the normalized render range, parseStructural reflects the raw source range.

Hard maintenance rule: share base config and low-level tracker utilities where useful, but do not try to unify how the two APIs finalize their SourceSpan values.

This means the same source gives different position.end values. Given $$info()*\nhello\n*end$$\nnext (27 chars):

                          parseRichText end (offset 23)
                          ↓
$$info()*\nhello\n*end$$\nnext
                        ↑
                        parseStructural end (offset 22)

| API             | info's end.offset               | Why                                   |
| --------------- | ------------------------------- | ------------------------------------- |
| parseRichText   | 23 (consumes the trailing \n)   | Block normalization eats the newline  |
| parseStructural | 22 (stops after the closing $$) | Raw syntax position, no normalization |

The same applies to block children: parseRichText adjusts offsets for leading-newline normalization, while parseStructural gives you raw positions.
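
The two end offsets follow from plain string arithmetic on the example:

```typescript
const src = "$$info()*\nhello\n*end$$\nnext";

const total = src.length;                // 27 characters, as stated above
const closeStart = src.indexOf("$$", 1); // 20: where the closing "$$" begins

// parseStructural's documented end: right after the closing "$$"
const structuralEnd = closeStart + 2;    // 22
// parseRichText's documented end: one further, past the trailing "\n"
const richTextEnd = closeStart + 3;      // 23
```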


Performance

Parser-core benchmarks are now centralized on the Performance page. This page keeps only the position-tracking-specific cost model and examples.

With tracking off, there's no extra cost at all. With it on, the impact is small for most use cases.

| State                           | Cost                                                                       |
| ------------------------------- | -------------------------------------------------------------------------- |
| trackPositions: false (default) | No line table, no position objects, near-zero overhead                     |
| trackPositions: true            | Line table built by scanning the text once; each lookup is a binary search |

Tracking overhead

This section is now fixed to the 1.1.6 baseline and will not be re-measured for later patch releases.

Measured on 1.1.6 with ~200 KB input (204,840 bytes). Test environment: HiSilicon TaiShan-v110 (Kunpeng 920) 24-core aarch64 / 32 GB / Node v24.14.0. 20 samples per case.

| API             | Without tracking | With tracking | Overhead |
| --------------- | ---------------- | ------------- | -------- |
| parseRichText   | ~22.45 ms        | ~34.07 ms     | ~51.8%   |
| parseStructural | ~14.88 ms        | ~18.49 ms     | ~24.3%   |

The 1.1.6 measurements show:

  • parseRichText tracking still has a visible cost, but it remains within a normal editor budget
  • parseStructural tracking is cheaper, though not free
  • if a pipeline needs tighter budgeting, parseRichText + trackPositions is the first path to inspect

Why is tracking more expensive in parseRichText?

Because it is not just "the same tracking work as parseStructural, plus tokens". It also has to carry positions through render semantics and normalize them for the final output contract.

  • parseStructural mainly pays for two things:
    • scan the text once to build a PositionTracker
    • attach raw-source positions to structural nodes
  • parseRichText pays for both of those, and then adds render-layer work on top:
    • map structural positions onto final TextToken.position
    • maintain spans while adjacent text tokens are merged
    • adjust offsets again when block normalization trims leading/trailing line breaks

The key point is that parseRichText.position and parseStructural.position are not the same contract.

  • parseStructural.position reports raw source ranges
  • parseRichText.position reports normalized render ranges

So the higher tracking cost in parseRichText is expected: its position semantics are heavier by design.


Incremental parsing in practice: how fast is parseSlice?

The benchmarks above show that tracking itself stays within an interactive budget. But the real payoff is that, combined with parseSlice (from yume-dsl-token-walker), you can re-parse only the changed region instead of the whole document.

Scenario

This section is also fixed to the 1.1.6 baseline.

The same ~200 KB document (204,840 bytes). The user edits one 36-character $$bold(...)$$ tag in the middle. You need the updated token tree.

Three strategies compared

Strategy A: full parseRichText
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Re-parse the entire 200 KB with parseRichTextβ”‚
β”‚  β‰ˆ 19.45 ms                                   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Strategy B: full parseStructural (rebuild every time)
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Similar speed to A, but gives you the        β”‚
β”‚  structural tree for incremental updates      β”‚
β”‚  β‰ˆ 18.85 ms                                   β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Strategy C: parseStructural (cached) + parseSlice (incremental)  ← recommended
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  First time: parseStructural to build tree    β”‚
β”‚  β‰ˆ 18.85 ms                                   β”‚
β”‚  Each subsequent edit:                        β”‚
β”‚    1. nodeAtOffset to locate the node         β”‚
β”‚       β‰ˆ 0.457 ms                              β”‚
β”‚    2. parseSlice to parse that node only      β”‚
β”‚       β‰ˆ 0.008 ms                              β”‚
β”‚                                               β”‚
β”‚  Incremental update total β‰ˆ 0.465 ms          β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

Measured data

| Step                               | Time      | Notes                                                   |
| ---------------------------------- | --------- | ------------------------------------------------------- |
| Full parseRichText                 | ~19.45 ms | Full 200 KB parse                                       |
| Full parseStructural + tracking    | ~18.85 ms | Rebuild structural tree plus positions                  |
| nodeAtOffset locate                | ~0.457 ms | Traverse the old structural tree and locate the hit node |
| parseSlice incremental parse       | ~0.008 ms | Parse only the 36-character slice                       |
| buildPositionTracker rebuild       | ~0.997 ms | Full-text line table scan (only when newlines change)   |
| Incremental total (locate + slice) | ~0.465 ms | Cursor-local reparse over the touched region            |

Why parseSlice is still faster

parseRichText is nearly as fast as parseStructural (~19.45 ms vs ~18.85 ms on 200 KB), but both still scan the entire document. parseSlice extracts just the changed node's span (36 characters here) and runs parseRichText on that tiny slice; the rest of the 200 KB is never touched.

parseSlice's cost is proportional to the slice size, not the document size. A 36-char node takes ~0.008 ms whether it's in a 10 KB or 200 KB document. For keystroke-level real-time editing, that means you can keep reparsing work local instead of rescanning the whole document on every update.
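
The locate step is cheap because it is just a containment walk over the cached tree. Here is a hypothetical sketch of that lookup (assumed node shapes; the real yume-dsl-token-walker implementation may differ):

```typescript
interface Span { start: { offset: number }; end: { offset: number } }
interface TreeNode { name: string; position?: Span; children?: TreeNode[] }

// Return the deepest node whose span contains `offset`, or undefined.
// Ends are treated as exclusive, matching half-open [start, end) spans.
function nodeAtOffsetSketch(node: TreeNode, offset: number): TreeNode | undefined {
    const pos = node.position;
    if (!pos || offset < pos.start.offset || offset >= pos.end.offset) return undefined;
    for (const child of node.children ?? []) {
        const hit = nodeAtOffsetSketch(child, offset);
        if (hit) return hit;
    }
    return node; // no child contains the offset, so this node is the deepest match
}
```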

Code: full vs incremental

import { createParser, createSimpleInlineHandlers, buildPositionTracker } from "yume-dsl-rich-text";
import { parseSlice, nodeAtOffset } from "yume-dsl-token-walker";

const parser = createParser({
    handlers: createSimpleInlineHandlers(["bold", "italic", "color"]),
});

// ── 200 KB document ──
let fullText = buildLargeDocument(); // ~200 KB of DSL text

// ═══════════════════════════════════════════
// Strategy A: full β€” re-parse everything on each edit
// ═══════════════════════════════════════════
const tokensA = parser.parse(fullText);
// ≈ 19.45 ms, paid on every single edit

// ═══════════════════════════════════════════
// Strategy C: incremental β€” build tree once, re-parse only the changed region
// ═══════════════════════════════════════════

// Step 1: build structural tree (once)
let tree = parser.structural(fullText, { trackPositions: true });
// ≈ 18.85 ms

// Step 2: build tracker (once)
let tracker = buildPositionTracker(fullText);

// ── User edits content near offset 105407 ──
const editOffset = 105407;
fullText = applyEdit(fullText, editOffset, "old", "new");

// Step 3: locate which node the edit falls in (≈ 0.457 ms)
const hitNode = nodeAtOffset(tree, editOffset);

// Step 4: parse only that node (≈ 0.008 ms)
if (hitNode?.position) {
    const freshTokens = parseSlice(fullText, hitNode.position, parser, tracker);
    // freshTokens have correct offset/line/column pointing to the original text
}

Key takeaways

  1. Build the tracker once. buildPositionTracker scans the full text to build a line-offset table (200 KB ≈ 1.00 ms). If the newline structure hasn't changed (only inline content was edited), the old tracker is still valid. When newlines are inserted or deleted, rebuild it; even that rebuild cost is trivial compared to a full re-parse.

    If you omit the tracker, parseSlice still works, but the returned tokens only have the correct offset; line / column fall back to slice-local coordinates instead of pointing directly into the original document.

  2. parseStructural gives you the structural tree. parseRichText and parseStructural run at similar speeds (~19.45 ms vs ~18.85 ms on 200 KB). The reason to use parseStructural in the incremental pipeline is not speed: it is that parseStructural produces the structural tree that nodeAtOffset needs to locate which node was edited, enabling parseSlice to re-parse only that node.

    If you do not enable trackPositions on parser.structural(...), structural nodes have no position, so the nodeAtOffset / parseSlice pipeline no longer has the SourceSpan it needs.

  3. parseSlice cost scales with slice size, not document size. A 36-char node takes the same time in a 10 KB or 200 KB document.

  4. Positions auto-map back to the original text. With a tracker, parseSlice returns tokens whose position.line / position.column point directly at the original full-text coordinates; no manual conversion is needed.
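
Takeaway 1 implies a simple invalidation check. This hypothetical helper (not a library function) captures the assumption that only newline changes invalidate the line table:

```typescript
// An edit replaces `removed` with `inserted` at some offset. The line table
// only depends on where "\n" characters sit, so it survives any edit that
// neither removes nor inserts a newline.
function needsTrackerRebuild(removed: string, inserted: string): boolean {
    return removed.includes("\n") || inserted.includes("\n");
}

needsTrackerRebuild("old", "new");       // false: reuse the cached tracker
needsTrackerRebuild("a", "line\nbreak"); // true: rebuild (≈ 1 ms on 200 KB)
```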

When to use it (and when not to)

| Scenario                                | Recommendation                                                                               |
| --------------------------------------- | -------------------------------------------------------------------------------------------- |
| One-shot parsing (SSG, build time)      | Use parseRichText directly; no need for a two-step pipeline                                  |
| Editor live preview, document < 200 KB  | Use parseRichText directly; ~19.45 ms on 200 KB is well within an interactive budget         |
| Editor live preview, document > 500 KB  | parseStructural + parseSlice; a full re-parse may exceed the frame budget                    |
| Per-keystroke re-render                 | Use the incremental pipeline; even ~24.2 ms per keystroke adds perceptible lag at typing speed |
| Batch-linting hundreds of files         | parseStructural + lintStructural is the right choice for structure-only analysis             |