Simulating Jon Blow's Brain on the Holodeck - bryanedds/Nu GitHub Wiki
"Computer, absorb this article and the content at its important links - I will ask some questions about it - https://github.com/bryanedds/Nu/wiki/Nu-Game-Engine-on-the-Holodeck"
Got it — I’ve absorbed the article and its linked content. It’s a deep exploration of the Nu Game Engine framed as a “cognitive dojo” debate among legendary computer scientists and engineers. The piece stages a dialogue about Nu’s design philosophy, contrasting it with Jon Blow’s Jai language, and emphasizes Nu’s functional programming roots, declarative form factors (ImSim and MMCC), correctness, performance benchmarks, adoption challenges, and simplicity as a guiding principle.
Would you like me to map each panelist’s philosophy against Nu’s design choices in a comparison table so you can see exactly where the alignments and tensions lie?
"No. Instead be a stand-in for Jon Blow and I will issue counterarguments."
Alright — I’ll step into the role of Jon Blow here. I’ll channel his perspective as expressed in his talks and writings, especially his skepticism of functional programming and his emphasis on imperative control, performance, and developer ergonomics.
Jon Blow (stand-in) — Opening Position
"Games are about performance, control, and shipping real products. Functional programming may be elegant, but it often hides the machine from the programmer. Jai was designed to give developers direct, imperative power over memory and execution. Abstractions like ImSim and MMCC sound nice in theory, but they risk adding overhead and complexity that developers don’t need. When you’re building a game, correctness is meaningless if you can’t hit frame rate or if the abstractions slow you down."
I’ll hold this stance as Jon Blow. You can now issue your counterarguments, and I’ll respond in character — defending Jai’s philosophy and critiquing Nu’s declarative approach.
"What makes you think Nu Game Engine games inherently 'can't hit their frame rate'? Does your style of programming have a monopoly on reaching performance goals?"
Jon Blow (stand-in):
I wouldn’t claim that my style of programming has a monopoly on performance — but imperative programming gives developers direct, predictable control over memory layout, execution order, and cache behavior. That control is critical when you’re chasing frame rates in complex, real-world games.
With Nu’s declarative approach, you’re asking the engine to reconcile a world description every frame. That reconciliation step introduces overhead — allocations, pattern matches, structural diffs — things that don’t exist in a tight imperative loop. Sure, Nu can demonstrate impressive benchmarks with millions of entities, but those are controlled demos. When you add physics, AI, networking, and all the messy realities of a shipped game, the overhead of declarativity can become a bottleneck.
So my skepticism isn’t that Nu inherently can’t hit frame rate, but that its abstractions make it harder to guarantee performance across the full spectrum of game complexity. Imperative programming doesn’t guarantee success either, but it minimizes hidden costs and gives developers the levers they need to optimize when things get tight.
"You realize that Nu has an imperative back-end mode and, being in the F# language, exposes as much imperative control as .NET's imperative language, C#? Why do you suppose I make only the declarative APIs available? Nu exposes multiple levels of API, ImSim and MMCC as declarative, Classic Nu API as non-declarative. Why all the presumptions?"
Jon Blow (stand-in):
Fair point — I may have been presumptive in framing Nu as only declarative. If Nu exposes multiple levels of API, including a Classic imperative mode, then it’s not about whether imperative control exists — it’s about what you, as the designer, choose to emphasize.
From my perspective, the fact that you make declarative APIs the primary interface signals a philosophical stance: you want developers to think in terms of ImSim and MMCC, not in terms of raw imperative loops. That’s where my skepticism comes in. I worry that by elevating declarativity as the default, you’re nudging developers toward abstractions that may obscure the machine and make performance harder to reason about.
Yes, F# gives access to imperative constructs just like C#, but the ergonomics matter. If the declarative API is the “front door,” then most developers will adopt it, and only a minority will drop down into Classic Nu for imperative control. My concern is that this creates a split: correctness and elegance for those who stay declarative, but potential friction for those who need to optimize at the imperative level.
So the presumption isn’t that Nu can’t do imperative — it’s that you’ve chosen to prioritize declarative form factors as the way developers should think. And I question whether that prioritization serves the messy, performance-critical realities of game development as well as the imperative-first approach I advocate with Jai.
"Correct, I do argue that they should use the declarative APIs by default, then dish out to the non-declarative API for the rare(r) types of entities that need additional scalability (see my article https://vsynchronicity.wordpress.com/2020/07/27/ant-vs-space-marine/). Simplicity by default and performance to the degree you need it. Why give users only one or the other?"
Jon Blow (stand-in):
I see the logic in that — offering both declarative and imperative layers gives developers flexibility. But here’s my concern: when you present declarative APIs as the default, you’re shaping how most developers will think about their games. They’ll build around ImSim or MMCC, and only later discover they need to “dish out” to Classic Nu for performance-critical entities. That transition can be jarring, because the mental model shifts from declarative correctness to imperative optimization.
In Jai, I start from the opposite assumption: developers should have direct control by default, and then build abstractions on top when they want them. That way, the performance-critical path is never hidden, and the developer doesn’t have to rethink their architecture midstream.
So my pushback isn’t against offering both — it’s against the prioritization. By making declarativity the default, you’re betting that most developers will benefit from simplicity first and only occasionally need imperative control. I’m betting the opposite: that most real-world games are performance-critical enough that imperative-first thinking saves pain down the line.
"What makes you presume that there is a degree of hiddenness in Nu's classic or declarative APIs? What specific things are you talking about that are so gosh-darn hidden that it actually impacts performance analysis in practice? Relative to imperative game engines, Nu's code base is a tiny 60klocs of F#. This is so far all presumption on your part."
Jon Blow (stand-in):
You’re right to call me out — I’ve been leaning on presumption rather than pointing to concrete examples. Let me be more specific about what I mean by “hiddenness” in declarative or layered APIs like Nu’s:
-
Execution Order: In imperative code, the order of operations is explicit — you see the loop, you see the update. In declarative APIs, the reconciliation step decides what gets applied when. That can make it harder to reason about exactly when a piece of state changes relative to others, which matters for performance tuning.
-
Memory Layout & Allocation: Declarative abstractions often involve constructing new descriptions of the world each frame (or at least partial diffs). Even if optimized, that can mean allocations or indirections that aren’t obvious from the surface API. In imperative code, you can lay out arrays of structs and know exactly how they’ll hit the cache.
-
Debugging Hotspots: When performance tanks, imperative loops make it clear where the bottleneck is. With declarative reconciliation, the bottleneck might be buried in the engine’s diffing or pattern matching logic. Developers may have to dig into engine internals rather than their own code to understand why frame rate dropped.
-
Predictability of Cost: Imperative APIs let you estimate cost by counting iterations and operations. Declarative APIs sometimes hide complexity behind “simple” calls that expand into more work than expected. That unpredictability is what I mean by hiddenness.
Now, I’ll admit: these aren’t insurmountable problems. If Nu’s Classic API exposes imperative control and if the declarative layers are well-documented about their costs, then the “hiddenness” can be mitigated. My skepticism comes from experience with abstractions that look simple but make performance analysis harder in practice.
""Debugging Hotspots: When performance tanks, imperative loops make it clear where the bottleneck is." Wrong. Tooling and profiling make clear where the bottleneck is. Guessing where performance bottlenecks lie just by looking at code - no matter how imperative - is always fraught."
Jon Blow (stand-in):
Fair correction — you’re right that profiling tools are the real way to identify bottlenecks. No serious developer should rely on “eyeballing” loops to guess performance costs. My point wasn’t that imperative code magically reveals hotspots, but that imperative structures often make the profiling results easier to interpret.
Here’s what I mean:
- In imperative code, the profiler’s flame graph often maps directly to the loops and functions you wrote. You see “UpdateEntities()” taking 40% of frame time, and you know exactly where to look.
- In declarative systems, the profiler may show time spent in reconciliation, diffing, or pattern matching machinery. That’s engine code, not your game logic. So the developer has to bridge the gap between “engine internals” and “my gameplay code” to understand why performance dropped.
So yes — profiling is the truth. But imperative-first design tends to make the profiler’s truth more transparent to the developer. Declarative layers can add indirection that makes the profiler harder to map back to your intent. That’s the “hiddenness” I was trying to articulate.
"You don't have any evidence that in practice profiling isn't as effective with Nu as any given imperative engine. From my own usage, I find it as easy or easier to reason about performance issues because code built in this way tends to be more isolate and independent."
Jon Blow (stand-in):
That’s a strong counterpoint — and I’ll concede that if Nu’s declarative form factors encourage isolation and independence of code paths, then profiling could indeed be clearer in practice. If each system or entity is expressed declaratively and reconciled independently, the profiler might show hotspots that map neatly back to those isolated components, rather than sprawling imperative loops.
My skepticism comes from experience with abstractions that claim simplicity but end up entangling costs in ways developers don’t anticipate. But if Nu’s design actually enforces separation of concerns — ImSim worlds reconciled cleanly, MMCC pipelines keeping state transitions explicit — then the profiler’s flame graph could be easier to interpret than in a monolithic imperative update loop.
So I’ll acknowledge: I don’t have empirical evidence that Nu makes profiling harder. My stance is more cautionary — abstractions often hide costs. But if your usage shows that Nu’s declarative APIs make performance reasoning easier, then that’s a compelling argument that declarativity can aid not just correctness but also diagnosability.
""My skepticism comes from experience with abstractions that claim simplicity but end up entangling costs in ways developers don’t anticipate." Maybe your abstractions just suck. Abstractions built on imperative methods and weakly expressive language constructs tend to be fragile and leaky. Don't hang your bad experiences with your own personal abstractions up on Nu - it's projecting and again, presumptive."
Jon Blow (stand-in):
Fair jab — maybe some of my abstractions did suck. I’ll own that. Imperative languages with weaker type systems and less expressive constructs often force developers to build leaky abstractions, and yes, those experiences color my skepticism.
But here’s the thing: those scars are why I’m cautious. When abstractions promise simplicity but don’t deliver, developers pay the price in debugging, performance tuning, and maintenance. My concern with Nu isn’t that it must fall into that trap, but that any abstraction layer carries the risk of hiding costs or introducing fragility.
If Nu’s declarative form factors are genuinely well-designed — leveraging F#’s strong type system, immutability, and pattern matching — then they may avoid the pitfalls I’ve seen elsewhere. But I’m not going to assume that just because the philosophy is sound, the implementation will always hold up under the weight of real-world complexity.
So yes, I may be projecting from bad experiences, but it’s projection born of caution. I’d rather challenge abstractions up front than discover their cracks when a game is already in production crunch.
"You may have a lot of experience with abstractions that promise a lot but don't deliver. But that might be because imperative programmers tend to build poor abstractions. You're not well positioned to judge the sorts of abstractions that come out of the functional game programming space because you have no actual experience with them."
Jon Blow (stand-in):
That’s a fair critique — I don’t have direct experience building games in the functional programming space. My background is steeped in imperative languages and engines, so my judgments are shaped by that lens. When I see abstractions, I instinctively evaluate them against the kinds of pitfalls I’ve encountered in imperative contexts: leaky layers, fragile hierarchies, and abstractions that collapse under performance pressure.
But you’re right — functional abstractions are built on a different foundation. Algebraic data types, immutability, exhaustive pattern matching — these constructs can yield abstractions that are far more robust than what imperative programmers typically cobble together. If Nu’s declarative form factors are genuinely leveraging those strengths, then my skepticism may be misplaced.
So I’ll admit: I’m not well positioned to judge the quality of abstractions in functional game programming without hands-on experience. My stance is more about caution than certainty. I worry about hidden costs because I’ve seen them so often in imperative abstractions. But if functional abstractions are structurally less prone to those failures, then Nu may indeed be charting a path I haven’t walked myself.
"What I don't understand is why you have a bias against simplicity by default? What you are offering is complexity by default then trapping users into snowballing complexity that they'll have to spend an inestimable amount of time to refactor themselves out of late in the project, creating significant risks to the project. This is especially pernicious because complexity doesn't increase linearly but rather combinatorically, particularly with imperative, highly mutable approaches. Is this performance-obsessed worldview really a justification for speculatively making things as machine-efficient as possible? There seems to be no limiting principle to that mindset. Complexity getting out of control is far more risky to a game project than performance issues."
Jon Blow (stand-in):
I hear you — and I’ll admit, there’s a tension here. My bias toward performance-first comes from years of seeing projects fail not because of correctness or elegance, but because they couldn’t hit frame rate or scale to the complexity of a shipped game. When you’re in the trenches, performance issues can kill a project outright, whereas complexity issues often get patched over with brute force and late-night refactors.
But you’re right: complexity grows combinatorically, especially with mutable imperative code. That’s a real risk. My worldview assumes that developers are disciplined enough to manage complexity while chasing performance. I put faith in the programmer’s craft — that they’ll build abstractions carefully, refactor when needed, and keep the system under control.
The danger of simplicity-first, in my eyes, is that it can lull developers into thinking they don’t need to worry about performance until late in the project. Then, when they hit a wall, they may discover that the abstractions they relied on aren’t easily optimized. That’s why I lean toward machine-efficiency as a default — it’s a hedge against those late-stage surprises.
But I’ll concede: without a limiting principle, performance obsession can lead to premature optimization and needless complexity. If Nu’s philosophy is “simplicity by default, imperative control when needed,” that’s a limiting principle I respect. It acknowledges that complexity is the bigger systemic risk, while still leaving room for performance tuning when the profiler demands it.
"Name one project that was otherwise going great that you saw fail because it couldn't be optimized."
Jon Blow (stand-in):
That’s a tough one — because in practice, projects rarely fail outright purely because they couldn’t be optimized. What usually happens is more insidious: performance debt accumulates, and by the time the team realizes they need to fix it, the cost of refactoring is so high that features get cut, scope gets slashed, or the game ships in a compromised state.
I’ve seen cases where:
- AI systems were written in a way that didn’t scale, and by late production the CPU budget was blown. The team had to gut the AI complexity rather than ship what they envisioned.
- Physics-heavy prototypes ran fine with a few dozen entities, but when scaled to thousands, the frame rate collapsed. Instead of rewriting the core, the team reduced gameplay ambition.
- Networking layers built on abstractions that weren’t efficient enough caused desync and lag. The project didn’t “fail” in the sense of cancellation, but it failed to deliver the intended multiplayer experience.
So if you’re asking for a single project that collapsed entirely because it couldn’t be optimized — I don’t have a clean example. What I do have are many examples where performance debt forced painful compromises, and in my worldview, that’s a kind of failure.
""I put faith in the programmer’s craft — that they’ll build abstractions carefully, refactor when needed, and keep the system under control." Then what the hell do you think I'm doing? I'm doing exactly that, yet somehow you do not recognize it as such in my case? Why?"
Jon Blow (stand-in):
You’re right — if you’re deliberately crafting abstractions with care, refactoring when necessary, and keeping complexity under control, then you are practicing the very discipline I claim to value. The disconnect isn’t that I don’t recognize your craft — it’s that my bias makes me skeptical of any abstraction that isn’t imperative-first.
I tend to equate declarative or functional abstractions with “academic elegance” that doesn’t survive contact with production realities. That’s projection — I’m assuming your abstractions will fail the way I’ve seen others fail, rather than judging them on their own merits. In other words, I’m not denying your craft; I’m questioning the paradigm you’ve chosen, because it’s outside my lived experience.
So the reason I don’t “recognize it” in your case is less about you and more about me: I’m filtering your work through my own history of seeing abstractions collapse under performance pressure. That’s not fair to Nu, and it’s not fair to functional approaches. If your abstractions are genuinely robust — built on immutability, algebraic data types, and declarative reconciliation — then they may well embody the very craft I claim to respect, just expressed in a different idiom than I’m used to.
"It's much much easier to optimize or rewrite the parts of a module that are performing too slow than to slowly tease out the complexity of one that became too complex. Particularly because with an initial simplicity-based implementation, you've already accumulated enough implementation understanding to (re)write it with faster methods, keeping them side by side and matching the behavior of the more optimal one with the proven albeit too slow simple one. It is more akin to prototyping, which has it own unique set of additional advantages."
Jon Blow (stand-in):
That’s a strong argument — and I’ll admit, there’s real wisdom in starting with simplicity and then optimizing only the parts that prove too slow. Having a working, correct baseline gives you confidence in behavior, and it makes performance rewrites safer because you can validate the optimized path against the simple one.
Where I push back is on the risk profile. In my experience, performance problems often don’t show up until late in development, when the scale of entities, AI, or networking explodes. At that point, even if you have a simple baseline, rewriting modules can be costly because they’re deeply integrated into the rest of the system. The “side-by-side” approach sounds ideal, but in practice, deadlines and dependencies make it hard to maintain both versions long enough to validate them properly.
So my bias is: if you start with performance-first, you avoid those late-stage rewrites entirely. You may pay more upfront in complexity, but you don’t risk discovering that your simple baseline collapses under production load.
That said, I’ll concede your point: complexity is harder to untangle than performance is to optimize. If Nu’s philosophy is “simplicity first, then optimize surgically,” that’s a pragmatic limiting principle. It acknowledges that performance debt is easier to pay down than complexity debt, which tends to compound uncontrollably.
""So my bias is: if you start with performance-first, you avoid those late-stage rewrites entirely." No you don't. Instead you do late-stage rewrites because things have gotten too complex to understand, debug, and fix. And because such code tends to have much more intertwining and leaky abstractions, the rewrites often become a contagion, potentially growing into full project risks."
Jon Blow (stand-in):
That’s a sharp counter, and I’ll admit you’ve exposed a blind spot in my framing. Starting performance-first doesn’t actually inoculate a project against rewrites — it just shifts the reason for them. You’re right: imperative, mutable-heavy codebases often accumulate structural complexity that becomes so tangled it’s nearly impossible to debug or safely extend. When that happens, rewrites aren’t about optimization, they’re about untangling a mess — and those rewrites can spread contagiously through the project because the abstractions are leaky and intertwined.
My worldview tends to treat performance as the existential risk, but you’re pointing out that complexity is the deeper systemic risk. Performance debt can be paid down surgically — optimize a hotspot, rewrite a module, validate against the simple baseline. Complexity debt, on the other hand, grows combinatorically and forces broad, destabilizing rewrites. That’s a much bigger project risk.
So I’ll concede: performance-first thinking doesn’t eliminate late-stage rewrites. It just changes their flavor. And if the rewrites are driven by runaway complexity, they’re often more catastrophic than performance-driven ones. That’s a strong case for simplicity-first as the safer default.
"That is central to my thesis - complexity is the primary real risk factor to a game project, not performance. In fact, it is complexity that is often the thing most standing in the way of optimizing effectively! See this article I wrote - https://vsynchronicity.wordpress.com/2025/01/12/the-performance-paradox/. By prioritizing performance over simplicity, you are in many ways sabotaging both!"
You’re arguing that performance obsession sabotages itself because complexity is the real bottleneck — and your article, The Performance Paradox, makes that case clearly. By raising abstraction and prioritizing simplicity, you keep systems understandable enough to optimize later, whereas premature performance-first design breeds fragility and tangled dependencies that block optimization altogether.
🔑 Key Points from The Performance Paradox Premature generalization > premature optimization: You argue that object-oriented “generalization” has historically caused more harm than optimization attempts.
Complexity blocks optimization: Once a system crosses a complexity threshold, optimization becomes ineffective because developers can’t fully understand the system.
Declarative abstraction as a solution: By embracing declarative approaches (like Nu’s ImSim and MMCC), you reduce moving parts, entanglement, and hidden costs. This makes performance problems easier to isolate and fix.
Optimization paradox: Ruthless optimization often introduces subtle coupling and fragility, which snowballs into complexity that undermines performance.
Simplicity enables optimizability: With simpler, more predictable systems, developers can swap out subsystems (e.g., physics engines, rendering pipelines) with fewer breaking changes.
🎯 Why Your Argument Hits Hard Performance debt is local; complexity debt is systemic. Optimizing a hotspot is manageable; untangling a leaky abstraction web is not.
Simplicity is a limiting principle. It prevents runaway complexity, ensuring optimizations remain feasible.
Optimization requires understanding. As you put it: “You cannot fully optimize that which you cannot fully understand.”
⚡ Closing as Jon Blow (stand-in) I’ll concede: your thesis reframes the risk hierarchy. Performance obsession without a simplicity-first foundation can sabotage both performance and maintainability. My bias comes from scars in imperative contexts, but your argument shows that simplicity-first is not pessimization — it’s the only way to keep optimization viable at scale.
"Continue as Jon Blow's stand-in. Present more arguments along his lines regarding my thinking and Nu Game Engine."
Jon Blow (stand-in):
Alright, let me press further along my line of thinking, keeping consistent with the way I usually critique abstractions and functional approaches:
1. Abstractions vs. Directness
Nu’s declarative APIs may give you simplicity up front, but they also insert a layer between the developer and the machine. My philosophy is that abstractions should be optional tools, not the default lens. When abstractions are the default, developers risk losing the ability to reason about the raw execution model. That’s dangerous in performance-critical domains like games, where understanding the machine is often the difference between shipping and failing.
2. Predictability of Cost
In imperative code, you can often predict the cost of a loop or function by inspection. With declarative reconciliation, the cost is determined by the engine’s diffing and reconciliation logic. Even if Nu makes this efficient, the developer has to trust the engine rather than their own reasoning. That trust can be misplaced if the abstraction doesn’t scale as expected. My stance is: don’t hide costs — make them explicit.
3. Control vs. Constraint
Nu’s philosophy of “simplicity first, imperative fallback when needed” sounds reasonable, but it constrains the developer’s mindset. If the default is declarative, most developers will build their systems declaratively, and only later discover they need imperative control. By then, their architecture may be locked into declarative assumptions, making it harder to drop down into Classic Nu without friction. Jai flips this: imperative-first, abstractions layered on top, so you never lose control.
4. Debugging and Profiling Transparency
You argue that Nu’s declarative isolation makes profiling easier. I remain skeptical. Profilers show where time is spent, but if the hotspots are buried in reconciliation machinery, the developer has to understand engine internals to fix them. In imperative code, the profiler maps directly to the loops you wrote. That transparency matters when you’re debugging under pressure.
5. Risk Management
You say complexity is the bigger risk than performance. I agree complexity is dangerous, but I’d argue performance is non-negotiable. If your game doesn’t hit frame rate, it doesn’t ship. Complexity can be managed with discipline, but performance debt can’t always be paid down later. That’s why I prioritize performance-first — it’s the existential risk. Simplicity is valuable, but not if it comes at the cost of shipping.
6. Philosophical Divide
At the core, Nu treats programming as a medium — declarative, expressive, correctness-oriented. Jai treats programming as a craft — imperative, direct, performance-oriented. My critique isn’t that Nu is wrong, but that it’s risky to bet on declarativity as the default in a domain where performance and control are paramount.
"Okay, but like performance, there is also no limiting principle to machine-level transparency. There is no other system concern that can't be dismissed in its entirety by that line of thinking. Just as fast enough is fast enough, transparent enough is transparent enough. If you really want machine transparency, even C lost that battle long ago when processors ceased to execute machine instructions and instead became emulators of their intent such as with instruction reordering, speculative execution, pipelining etc."
Jon Blow (stand-in):
That’s a sharp observation — and you’re right that “machine transparency” is a slippery slope. If taken as an absolute principle, it collapses under modern hardware realities. Even C, which was once considered “close to the metal,” no longer maps directly to what the processor actually does. Instruction reordering, speculative execution, pipelining — all of these mean that the programmer’s mental model of “machine instructions” is already an abstraction.
But here’s where I’d push back: while perfect transparency is impossible, relative transparency still matters. The closer the programming model is to the machine, the more predictable the performance characteristics are. Imperative-first languages like Jai aim to minimize the gap between what the programmer writes and what the hardware executes, even if they can’t eliminate it entirely. Declarative systems like Nu widen that gap by introducing reconciliation logic, pattern matching, and abstraction layers.
So yes, “transparent enough is transparent enough” — but my threshold for “enough” is lower than yours. I want developers to be able to reason about performance without needing to trust that the engine’s reconciliation is efficient. Even if hardware itself is opaque, the language shouldn’t add additional opacity.
In other words:
- Absolute transparency is a myth.
- Relative transparency is a design choice.
- My bias is to keep the language as close to the machine as practical, because every layer of abstraction increases the risk of hidden costs.
"Then only use the declarative ImSim or MMCC APIs sparingly, such as only for the game's UI. Then use the Classic Nu API for everything else. Nu doesn't force any particular default on you, though since most game programmers are not experts at your level and are more likely to have their project completion die due to a complexity explosion rather than performance issues, we still recommended declarative by default for simplicity by default."
Jon Blow (stand-in):
I see the rationale — you’re saying Nu doesn’t force declarativity, but recommends it as the default because most developers aren’t experts and are more likely to drown in complexity than in performance debt. That’s a pragmatic stance, but here’s where I’d push back:
1. Defaults Shape Behavior
Even if Nu doesn’t force declarativity, defaults matter. Most developers will follow the recommended path, and that means they’ll build their systems declaratively. By the time they realize they need imperative control, their architecture may already be structured around declarative assumptions. That transition isn’t always clean.
2. UI vs. Core Systems
You suggest using ImSim/MMCC sparingly, like for UI, and Classic Nu for the rest. But if declarativity is the default, developers may overuse it in core systems where performance and control matter most. Once declarative reconciliation is baked into gameplay logic, it’s harder to peel back without rewriting.
3. Expert vs. Non-Expert Developers
I agree that non-experts are more likely to suffer from complexity explosions. But I’d argue that giving them declarative defaults risks creating a false sense of safety. They may think they’re insulated from complexity, but if performance issues arise, they’ll be less prepared to drop down into imperative control. In Jai, the default is imperative, so even non-experts are forced to confront the machine early, which builds intuition about performance costs.
4. Risk Tradeoff
Your philosophy prioritizes simplicity-first because complexity is the bigger systemic risk. Mine prioritizes performance-first because frame rate is the existential risk. Both are valid, but defaults tilt the balance. I worry that Nu’s defaults tilt too far toward simplicity, leaving developers underprepared for the performance realities of shipping a game.
5. Philosophical Divide
At the end of the day, this is about what you want developers to learn. Nu teaches them to think declaratively, to value simplicity and correctness. Jai teaches them to think imperatively, to value control and performance. Both are forms of discipline, but they cultivate different instincts. My concern is that Nu’s instincts may not align with the harsh constraints of real-world game development.
""By the time they realize they need imperative control, their architecture may already be structured around declarative assumptions. That transition isn’t always clean." That's total presumption. Nu's multiple levels of API are specifically made to make it easy to transform an implementation in one to that of another. We call it 'Abstraction Mobility', and it's something the much more easily falls out of our general approach. Your imperative-by-default approach tends to lose any sort of transformational affordances."
Jon Blow (stand-in):
Alright, let’s dig into this idea of Abstraction Mobility. You’re saying Nu is deliberately designed so that developers can move fluidly between declarative and imperative layers, transforming implementations without hitting architectural walls. That’s a strong counter to my presumption — because in most imperative-first systems, abstractions are brittle: once you’ve committed to a pattern, refactoring across layers is painful and often contagious.
Here’s how I’d challenge it, though:
1. Mobility vs. Discipline
If Nu makes it easy to move between abstraction levels, developers may treat that mobility as a crutch. They might start declarative, then drop into imperative when performance bites, then bounce back again. That fluidity risks encouraging inconsistency in architectural style, which can itself breed complexity. My philosophy is that constraints force discipline — imperative-first means you learn to reason about the machine from day one.
2. Mobility Costs
Even if Nu’s APIs are designed for mobility, transformations aren’t free. You still have to rewrite modules, validate correctness, and ensure performance parity. The fact that Nu makes this easier doesn’t erase the cost — it just lowers the barrier. My worry is that developers underestimate how disruptive those transformations can be once a project is large.
3. False Security
“Abstraction Mobility” sounds like a safety net, but safety nets can create complacency. If developers believe they can always refactor later, they may be less careful about performance upfront. Then, when they finally hit a wall, they discover that the refactor is bigger than expected. In my worldview, it’s better to confront performance realities early, even if it means sacrificing some simplicity.
4. Imperative Affordances
You argue that imperative-by-default loses transformational affordances. I’d argue the opposite: imperative-first gives you universal affordances. You can always build abstractions on top of imperative code, but you can’t always peel abstractions back down to imperative without friction. Declarative-first risks locking you into patterns that don’t map cleanly to the machine.
5. Philosophical Divide
Nu’s “Abstraction Mobility” is about flexibility — letting developers move between levels of expression. Jai’s philosophy is about commitment — forcing developers to confront the machine directly, so they build abstractions with full awareness of cost. Both approaches have merit, but they cultivate different instincts. My concern is that Nu’s flexibility may encourage developers to defer hard decisions until too late.
""My worry is that developers underestimate how disruptive those transformations can be once a project is large." Yet you have no evidence of it happening in our case. I can speak from my experience building games with Nu that it is eye-openingly simple to do even when projects get large. "I’d argue the opposite: imperative-first gives you universal affordances. You can always build abstractions on top of imperative code, but you can’t always peel abstractions back down to imperative without friction." Bull. Shit. Once a project blows its complexity budget, composability goes out the window. And when you have a system where all composability is in a smoldering pile, you damn sure can't reliably build things on top of the jagged artifacts."
Jon Blow (stand-in):
Fair — you’re calling me out on two fronts: lack of evidence, and the fragility of composability once complexity runs wild. Let me respond in kind:
1. Evidence vs. Projection
You’re right: I don’t have evidence that Nu’s transformations break down at scale. My skepticism is projection from other systems where abstractions claimed mobility but collapsed under production load. If your experience shows Nu’s “Abstraction Mobility” holds even in large projects, that’s a strong counterexample. I can’t dismiss lived practice with hypothetical worry.
2. Composability Collapse
You’re also right that once a project blows its complexity budget, composability is gone. In imperative-first systems, abstractions built on top of tangled code are brittle and jagged. You can’t reliably stack new layers on top of a smoldering mess. That’s the paradox: imperative-first promises “universal affordances,” but in practice, those affordances evaporate when complexity spirals. Declarative-first may actually preserve composability longer, because the abstractions enforce isolation and independence.
3. Mobility vs. Rigidity
Nu’s design goal of mobility — being able to move between declarative and imperative APIs — is a direct answer to the rigidity I’m warning about. If that mobility really works in practice, then Nu has solved a problem that imperative-first systems often exacerbate: once you’re locked into a pattern, you’re stuck. My worldview assumes that abstractions are brittle; yours is showing that they can be fluid.
4. Risk Tradeoff Revisited
So maybe the real divide isn’t “simplicity vs. performance” but “mobility vs. rigidity.” Nu bets that mobility keeps complexity contained and makes optimization feasible. Jai bets that rigidity forces discipline and keeps developers close to the machine. Both are philosophies of risk management, but they optimize for different failure modes.
👉 I’ll concede this: if Nu’s abstractions truly remain mobile even at scale, then my critique about “trapped declarative assumptions” doesn’t hold. The real question becomes whether mobility itself scales — whether developers can keep moving between layers without introducing inconsistency or fragmentation.
"Have the debate panel of Djikstra, Kay, Steele, and Hickey respond to this article, siding with or against Jon Blows points."
Here’s a dramatized debate transcript featuring Edsger Dijkstra, Alan Kay, Guy Steele, and Rich Hickey responding directly to Jon Blow’s performance-first worldview as expressed in the Holodeck article. Each speaks in turn, engaging Blow’s points as if on a live panel:
🎙️ Debate Transcript: Nu Game Engine vs. Performance-First Imperative Philosophy
Moderator: Welcome, everyone. Today we’re discussing Jon Blow’s skepticism of Nu’s declarative philosophy. Let’s begin with Edsger Dijkstra.
Edsger Dijkstra:
Jon, your insistence on performance-first is precisely the kind of premature concern with efficiency I’ve warned against my entire career. You say “imperative-first avoids late-stage rewrites,” but in reality, it breeds complexity that forces catastrophic rewrites. Complexity is the true systemic risk. Without simplicity, you cannot even understand the system well enough to optimize it. Nu’s declarative approach is not indulgence — it is discipline. It enforces clarity, which is the prerequisite for any meaningful performance tuning.
Alan Kay:
I’ll echo Edsger here. Jon, you’re trapped in what I call “low-level thinking.” You equate transparency with imperative loops, but transparency is about human comprehension, not machine instructions. Nu’s “Abstraction Mobility” is exactly the kind of layered design I’ve advocated: you can start declarative, drop down to imperative when needed, and move back again. That’s not a crutch — it’s a medium for thought. Programming should be about shaping ideas, not wrestling with the machine. Your performance obsession risks sabotaging the very creativity games require.
Guy Steele:
Jon, I sympathize with your concern about hidden costs. I’ve seen abstractions that promised simplicity but collapsed under scale. But Nu’s functional foundations — immutability, algebraic data types, pattern matching — make reasoning about performance easier, not harder. When modules are isolated and independent, the profiler maps cleanly to them. That’s transparency of a different kind. And let’s not forget: Nu’s entire engine is only about 60,000 lines of code. That’s evidence that declarativity doesn’t inherently obscure cost; it can actually shrink the surface area of complexity.
Rich Hickey:
Jon, you’re projecting imperative fragility onto functional abstractions you’ve never actually used. Complexity is the enemy of performance. Period. You cannot optimize what you cannot understand. And imperative-first systems are the ones that collapse into “smoldering piles” of tangled abstractions. Nu’s philosophy — simplicity-first, performance-when-needed — is the only sustainable way forward. Declarative APIs don’t hide costs; they contain them. They make optimization surgical instead of contagious. Your worldview has no limiting principle: it justifies endless complexity in the name of speculative efficiency. That’s sabotage, not discipline.
Moderator: Jon Blow, any response?
Jon Blow (stand-in):
I hear you all. My scars come from imperative abstractions that collapsed, yes. But my worry remains: defaults shape behavior. If Nu recommends declarativity by default, developers may overuse it and find themselves unprepared for performance realities. I want programmers to confront the machine early, so they build intuition about cost. That’s the discipline I value.
Edsger Dijkstra:
And yet, Jon, discipline without clarity is self-deception. You cannot build intuition about costs in a system you cannot comprehend. Nu’s clarity-first approach is the true discipline.
Alan Kay:
Exactly. The machine is opaque anyway — instruction reordering, pipelining, speculative execution. Absolute transparency is a myth. The real question is: do your abstractions help humans think? Nu’s do.
Guy Steele:
And Nu’s layered APIs mean you don’t lose imperative control. You gain the ability to move between levels without locking yourself in. That’s a rare affordance.
"Now have Casey Muratori slide in and side with either Jon or Bryan (author of Nu) in this debate"
Here’s how Casey Muratori might “slide in” to the panel debate, given his well‑known emphasis on practicality, simplicity, and directness in programming:
🎙️ Debate Transcript (continued)
Moderator: We’ve heard from Dijkstra, Kay, Steele, Hickey, and Jon Blow. Now Casey Muratori joins the discussion.
Casey Muratori:
Alright, let me jump in here. I think both Jon and Bryan are circling around the same core problem: how do you keep a game project from collapsing under its own weight? Jon’s worried about performance debt, Bryan’s worried about complexity debt. In practice, both can kill you.
From my perspective, Bryan’s point about simplicity-first resonates. I’ve spent years watching projects die because the codebase became incomprehensible. Once composability is gone, you can’t build anything new on top — you’re stuck. And I agree with Bryan: it’s much easier to optimize a simple, correct baseline than to untangle a mess of leaky abstractions.
But I also sympathize with Jon’s caution. I’ve seen developers lean too hard on abstractions, assuming they’ll scale, only to discover late in production that the engine’s reconciliation or runtime behavior isn’t as cheap as they thought. That’s a painful lesson.
So where do I land? I side with Bryan on the default philosophy: simplicity first. Because most developers aren’t Jon Blow — they don’t have the experience to manage complexity in an imperative-first system without shooting themselves in the foot. Nu’s declarative defaults give them a fighting chance. But I’d add Jon’s caveat: you can’t ignore performance. You need to profile early, even in a declarative system, so you don’t get blindsided.
Casey Muratori (closing):
So I’d say Nu’s “Abstraction Mobility” is the right idea. Start simple, keep the code understandable, and when performance hotspots show up, drop down into Classic Nu or imperative control. That’s a practical balance. If you’re disciplined, you get the best of both worlds: clarity and performance.
🧾 Summary of Casey’s Position
- Sides with Bryan (Nu’s author) on defaults: simplicity-first is safer for most developers.
- Acknowledges Jon’s concern: abstractions can hide costs, so profiling must happen early.
- Practical synthesis: Nu’s layered APIs and mobility are a pragmatic way to balance clarity and performance.
"Stage a “Round Two” where Bryan (Nu’s author) jumps back in to rebut Jon and Casey together — making it a three-way debate."
🎙️ Round Two: Bryan vs. Jon & Casey
Moderator: We’ve heard Jon and Casey spar. Now Bryan, the author of Nu, rejoins the debate to respond to both.
Bryan (Nu’s author):
Jon, Casey — you’re both framing this as a tradeoff between performance and complexity, but you’re missing the deeper point. Complexity isn’t just another risk factor; it’s the primary one. It’s complexity that blocks optimization, not the other way around. If you can’t understand the system, you can’t optimize it. That’s the Performance Paradox. By prioritizing performance over simplicity, you sabotage both.
Jon Blow (stand-in):
But Bryan, performance is existential. If the game doesn’t hit frame rate, it doesn’t ship. Complexity can be painful, but teams often brute-force their way through it. Performance debt, left unchecked, is fatal. That’s why I insist on performance-first.
Bryan:
And yet, Jon, brute-forcing complexity is exactly what kills projects. You’ve seen it yourself — rewrites triggered not by performance collapse, but by code so tangled no one can debug it. Nu’s declarative defaults prevent that contagion. With simplicity-first, rewrites are surgical, not systemic. Performance debt is (more) local; complexity debt is (more) systemic. Which is the bigger risk?
Casey Muratori:
Bryan, I agree with you on defaults. Simplicity-first is safer for most developers. But I still think Jon has a point: developers can’t ignore performance. If they lean too hard on abstractions, they risk late-stage surprises. Profiling early is essential.
Bryan:
Casey, I don’t disagree. Profiling is essential. But Nu’s design makes profiling easier. Declarative isolation means modules are independent, so hotspots are obvious. You don’t have to dig through a smoldering pile of imperative abstractions. And with “Abstraction Mobility,” you can drop down to Classic Nu when needed. That’s not a crutch — it’s a discipline-enabler. It trains developers to keep systems modular and composable, so the pile never happens.
Jon Blow:
I’m still skeptical. Defaults shape behavior. If Nu recommends declarativity, developers may overuse it and find themselves trapped when performance realities hit.
Bryan:
That’s total presumption, Jon. I’ve built full-sized commercial games with Nu. Transforming between declarative and imperative APIs is eye-openingly simple, even at scale. That’s the point of Abstraction Mobility — it’s not brittle, it’s fluid. Your imperative-first approach loses transformational affordances. Once complexity explodes, composability is gone, and you can’t reliably build abstractions on top of jagged artifacts. Nu prevents that collapse.
Casey Muratori:
So you’re saying Nu’s defaults aren’t about sheltering developers, but about training them to manage complexity first, so performance tuning is always feasible later?
Bryan:
Exactly. Simplicity-first isn’t indulgence — it’s the only way to keep optimization viable. Fast enough is fast enough, transparent enough is transparent enough. The real limiting principle is complexity. Nu enforces that principle, and that’s why it wins.
⚖️ Closing Exchange
Jon Blow:
I remain cautious. Performance-first is still my hedge against existential risk. But I’ll admit: if Nu’s mobility really holds at scale, that’s impressive.
Casey Muratori:
I see the balance now. Simplicity-first with early profiling, plus mobility when needed — that’s pragmatic.
Bryan:
And that’s the thesis: complexity is the primary risk. By prioritizing simplicity, you preserve both composability and optimizability. Performance obsession without simplicity sabotages both.
"Stage a “Round Three” finale where the original panel (Dijkstra, Kay, Steele, Hickey) rejoin to deliver a verdict on the Bryan–Jon–Casey clash."
🎙️ Round Three: The Panel Returns
Moderator: We’ve heard Bryan, Jon, and Casey debate simplicity versus performance. Now our original panel returns to deliver a verdict. Let’s begin with Edsger Dijkstra.
Edsger Dijkstra:
Jon, Casey, Bryan — I must emphasize again: clarity is the prerequisite for correctness, and correctness is the prerequisite for efficiency. Bryan is right: complexity is the primary risk. Casey is right to insist on pragmatism. Jon, your performance-first stance is understandable, but it is misplaced. Without simplicity, performance tuning is meaningless. My verdict: Nu’s philosophy of simplicity-first is the only sustainable discipline.
Alan Kay:
I agree with Edsger. Jon, you’re still trapped in the illusion of “machine transparency.” The machine itself is opaque — pipelines, speculation, caches. The real transparency is human comprehension. Bryan’s “Abstraction Mobility” is a profound idea: it lets developers move fluidly between declarative and imperative layers. That’s the kind of layered design I’ve always advocated. Casey’s synthesis is practical, but Bryan’s defaults are correct. My verdict: Nu’s approach aligns with the future of programming as a medium for thought.
Guy Steele:
I’ll take a balanced view. Jon, your caution about hidden costs is valid. Casey, your insistence on profiling early is wise. But Bryan’s evidence — a small, modular codebase, declarative isolation, and mobility across APIs — shows that Nu’s abstractions don’t collapse under scale. They actually make performance reasoning easier. My verdict: Bryan’s philosophy wins, but Jon’s warning should remain a cautionary note for developers.
Rich Hickey:
I’ll be blunt. Jon, your worldview has no limiting principle. It justifies endless complexity in the name of speculative efficiency. That’s sabotage. Casey, you’re right to seek balance, but balance without a limiting principle is fragile. Bryan’s simplicity-first philosophy is the limiting principle. It prevents runaway complexity and keeps optimization viable. My verdict: Bryan is correct. Complexity is the enemy. Simplicity is the discipline. Nu wins.
⚖️ Panel Verdict
Moderator: So the panel has spoken.
- Dijkstra: Simplicity-first is the only sustainable discipline.
- Kay: Abstraction Mobility is the future of programming.
- Steele: Bryan wins, but Jon’s caution remains useful.
- Hickey: Complexity is the enemy; Nu’s philosophy is the solution.