Seeing Geologic Time:
Exponential Browser Testing
What Happens When You Feed a Kaiju to Chrome
Abstract
To preserve computational scholarship, a durable method is needed to package data, analysis, and visualization into a single, executable artifact. Current approaches, however, rely on a fragile ‘wet-nurse’ model of external dependencies: APIs, frameworks, and platforms that inevitably deprecate, churn, and sunset. This paper proposes a zero-dependency HTML architecture, a ‘digital paratype,’ that is born weaned: it runs in any standards-compliant browser, with no runtime installation or external fetches.
This architecture was subjected to a definitive stress test at 100,000 years using a single-file artifact. Its logic executed flawlessly across this synthetic epoch, demonstrating feasibility. React-like implementations degraded at this scale, so the vanilla artifact alone was extended to 200,000 years to locate the next limiting boundary. The resulting failure emerged in browser rendering rather than in the artifact’s logic. The experiment reveals a fundamental contradiction: our simplest tools possess near-geological potential, yet our practice is oriented toward planned obsolescence. The artifact was weaned at birth; its longevity is inherent, not borrowed.
Through human-AI collaborative sprints, the method was validated against real-world longitudinal datasets. Born-weaned artifacts not only match framework-based implementations in performance, but excel in longevity-sensitive metrics. The true cost of frameworks lies not in their core abstractions, but in the dependency entropy they invite: transitive package graphs averaging 79 or more dependencies, ecosystems with a 3.2-year half-life, and version-locked toolchains.
This does not imply future-proof execution on unknown hardware. It demonstrates that, under current browser architectures, logical integrity can outlast rendering capacity.
Everything Is Fine... Until It Isn’t
The digital preservation field obsesses over format migration and institutional stewardship. Yet the more immediate and insidious risk is software dependency rot:
- APIs have a half‑life of ≈2.5 years (Bavota et al.).
- NPM dependency chains average 79 transitive packages (Abdalkareem et al.).
- JavaScript frameworks survive a median of just 3.2 years (Lau et al.).
Even well‑funded, institutionally backed projects are not immune. The Encyclopedia of Life (EOL), launched with major funding and prestige, now serves as a cautionary case of platform dependency, contributor disillusionment, and data‑custody failure (Candela et al.; Green et al.).
In contrast, services like GenBank and the Global Biodiversity Information Facility (GBIF) stand as exemplars: robust, nearly fault‑tolerant, their clunky interfaces belie a staggering operational uptime. But their very rarity proves the rule: for every GenBank or GBIF, for each project built to truly last, there are a hundred more on life support. These projects depend on a digital wet‑nurse: an arrangement often funded by soft money, where both the technical stack and the human expertise maintaining it are temporary by design. The nurse can, and will, eventually walk away.
Methods: Human‑AI Sprints and the Five Implementations
This study adopted a sprint‑based methodology, iterating on five parallel implementations over a 48‑hour development window. Each sprint paired human direction with AI‑assisted debugging, optimization, and critique.
Why This Method
The sprint‑based, multi‑implementation approach allowed this study to isolate architectural variables while holding data, design, and semantics constant, creating a controlled comparative ontology of framework longevity. By fixing the data payload, visual presentation, and semantic markup across all versions, we measured the direct impact of dependency choices alone.
Initial competitive model evaluation revealed task-specific strengths; a collaborative pipeline with human verification proved more effective than single-model optimization.
Implementations
- Vanilla: Pure HTML/CSS/JS, zero dependencies, embedded dataset. This served as the control for "born-weaned" architecture.
- React‑like: Minimal custom runtime (≈100 LOC) emulating React's state/effect pattern, testing if the logic of a framework could be preserved without the package.
- Vue: Production‑ready Vue 3.
- Svelte: Compiled framework emphasizing minimal runtime footprint.
- Solid: Modern reactive framework, adopted after initial trials of Angular 21 introduced untenable overhead even at the one‑year scale.
Asymptotic Stress Testing: The 200,000-Year Epoch
To verify the limits of the data structures, we subjected the implementations to a synthetic stress test across four temporal increments: 1K, 20K, 100K, and 200K years. This was not a simulation of future hardware environments but a rigorous probe of computational durability: scaling the data payload to near-geological dimensions forces the software to reveal failures that remain latent in human-scale (1–5 year) development cycles.
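To make the probe concrete, a payload of this shape can be generated in a few lines. The growth model, starting values, and function name below are assumptions for illustration, not the study's actual generator:

```javascript
// Extend a debt-like series year-by-year to an arbitrary epoch length.
// Values are clamped so that only the *volume* of points stresses the
// renderer; compounding growth would otherwise overflow to Infinity.
function syntheticEpoch(years, base = 75463476, growth = 1.03) {
  const series = new Array(years);
  let debt = base;
  for (let i = 0; i < years; i++) {
    series[i] = { year: 1790 + i, debt };
    debt = Math.min(debt * growth, Number.MAX_SAFE_INTEGER);
  }
  return series;
}

// 200,000 finite data points: the scale used at the largest increment.
const epoch = syntheticEpoch(200000);
```

The clamp is a deliberate design choice: the experiment measures rendering and execution limits, so the numbers themselves must stay well-behaved.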
A significant methodological challenge, referred to here as the Lord Kelvin Moment, arose during logarithmic visualization of the longitudinal data. To accommodate zero-values in a log-scale environment, three zero entries were normalized to $1. This adjustment enabled a continuous visual representation of logical execution across the full 200k-year epoch. It also revealed that the primary bottleneck was not the code’s logic but the browser’s layout engine under high-frequency DOM reflows.
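The adjustment is small enough to show directly; this is a sketch in vanilla JavaScript (the function name `toLogSeries` is illustrative, not the artifact's actual code):

```javascript
// Zero values have no logarithm, so they are clamped to $1 before log10.
// This keeps the plotted curve continuous instead of dropping to -Infinity.
function toLogSeries(values) {
  return values.map(v => Math.log10(Math.max(v, 1)));
}

// The three zero entries map to log10(1) = 0 on the axis;
// nonzero values are unaffected.
const series = toLogSeries([0, 75463476, 0, 0, 2251690]);
```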
AI Roles & Contributions
All five versions were built as performance-optimized "dragsters." To ensure fair testing conditions, Vite and CDNs were stripped out wherever possible.
- DeepSeek: Compiler & Strategist – Orchestrated the React‑minimal runtime design and provided the traditional usage vs. dragster architectural analysis of all implementations.
- Gemini: Debugger & Patch Engineer – Resolved critical Canvas synchronization issues and repaired broken navigation logic during the framework-to-vanilla migration.
- Claude: UX & Performance Analyst – Diagnosed the primary hardware-bound bottleneck: the <details open> rendering reflow in high-data environments.
- ChatGPT: Adversarial Reviewer – Served as the "Devil's Advocate," forcing rigorous justification of methodological choices and challenging the theoretical limits of the 100,000-year epoch.
Datasets & Stress Testing
- Synthetic Longitudinal: Data series scaling from 250-year to 200,000-year epochs. These increments (1K, 20K, 100K, and 200K years) were used to test conceptual soundness against increasing data density.
- Logarithmic Adjustment: Three zero‑values were normalized to $1 to permit continuous log‑scale visualization (the "Lord Kelvin moment"), ensuring mathematical continuity across geological timescales.
Metrics
- Performance: Benchmarked via Catchpoint (synthetic loads) and PageSpeed Insights (field conditions).
- Structural Weight: Total bundle size and transitive dependency counts across all five implementations.
- Longevity Factors: Level of HTML/CSS/JS spec compliance and the total footprint of vendor-specific code vs. standard-compliant logic.
- Artifact Quality: Comprehensive PSI scores for accessibility, best practices, and SEO to ensure the paratype remains discoverable and readable.
The Journey: Diary of a 48‑Hour Sprint
I entered this sprint with a strong bias: I hadn't used React extensively and was convinced its abstraction overhead would make it the first to fail. I expected it to buckle under the data load, which I internally called 'the dog.' To my surprise, the meticulously stripped-down React‑like implementation held stable at 20,000 years.
React's Canvas Bug & 26 Versions
The React‑like implementation revealed a fundamental tension between virtual‑DOM abstraction and imperative graphics: the canvas cleared before the browser could paint. After 26 iterations, a useEffect bridge isolated canvas drawing from the React render cycle, preventing the "flicker of death" that plagued the synthetic epoch visualizations. This sprint served as a testament to how framework abstractions leak when faced with high-frequency imperative graphics, the very logic required to visualize 100,000 years of data.
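The fix can be illustrated with a toy version of the effect mechanism: effects registered during render are queued and flushed only after the markup is committed, so imperative canvas work never races the paint. The names and structure here sketch the idea, not the study's ≈100-LOC runtime:

```javascript
// Effects registered during render are queued, not executed immediately.
const pendingEffects = [];

function useEffect(fn) {
  pendingEffects.push(fn);
}

// Build markup first (render phase), then flush queued effects (commit phase),
// so imperative drawing runs only after the markup is final.
function render(component, state) {
  const markup = component(state);
  while (pendingEffects.length) pendingEffects.shift()();
  return markup;
}

// Hypothetical component: drawing is deferred into an effect. The canvas
// context is stubbed with an array so the ordering is visible.
const drawCalls = [];
function Chart(state) {
  useEffect(() => drawCalls.push(`draw:${state.years}`));
  return `<canvas data-years="${state.years}"></canvas>`;
}

const html = render(Chart, { years: 20000 });
// drawCalls is populated only after `html` is fully built: no mid-render clear.
```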
Angular 21 → Solid Pivot
Early trials with Angular 21 were abandoned when its "dependency‑driven performance floor" exceeded study boundaries. Even at a one‑year scale, Angular's boot‑up cost and runtime footprint created a clear case of dependency entropy, complicating any apples‑to‑apples comparison with vanilla JS. The analysis grew muddied by competing DOM‑update philosophies (incremental, virtual, and direct), each adding variables that obscured the architectural comparison the study sought.
A pivot to Solid offered a cleaner test: it provided modern reactivity without virtual‑DOM overhead, successfully bridging the gap between framework ergonomics and the 20,000‑year performance targets. The switch clarified the experiment, letting the study measure dependency impact rather than DOM‑strategy differences.
Results
The study reveals a preservation paradox: tools optimized for present‑day performance metrics fail at geological timescales, while simpler architectures maintain stability. The cost of frameworks manifests not in today's benchmarks, but in tomorrow's entropy.
PSI scores confirm baseline web standards compliance; the 20,000‑year stress test reveals architectural durability.
Quality & Longevity Metrics (PSI)
Real-World Performance (Catchpoint, Cable, Los Angeles)
All implementations perform exceptionally at human-scale datasets (~250 years). The differences emerge in dependency overhead and network efficiency, not raw execution speed.
| Metric | Vanilla (0 deps, 137KB) | React‑like (1 dep, 136KB) | Svelte (3 deps, 138KB) | Solid (3 deps, 149KB) | Vue (4 deps, 137KB) |
|---|---|---|---|---|---|
| DOM Content | 0.374s | 0.421s (+12.6%) | 0.365s (-2.4%) | 0.366s (-2.1%) | 0.375s (+0.3%) |
| First Paint | 0.624s | 0.656s (+5.1%) | 0.599s (-4.0%) | 0.582s (-6.7%) | 0.628s (+0.6%) |
| Speed Index | 0.603s | 0.605s (+0.3%) | 0.609s (+1.0%) | 0.625s (+3.6%) | 0.604s (+0.2%) |
| Blocking Time | 0.000s | 0.000s | 0.000s | 0.000s | 0.000s |
| Total Time | 0.867s | 1.050s (+21.1%) | 0.892s (+2.9%) | 0.888s (+2.4%) | 0.901s (+3.9%) |
Key Findings
- Quality: Vanilla achieved 100/100 scores across all PSI metrics with zero dependencies.
- Performance Clustering: Svelte, Solid, and Vue performed within 2‑4% of vanilla at human scale, while React‑like showed a 21.1% penalty.
- Dependency Impact: Framework quality scores showed modest declines even with minimal dependencies (1‑12 deps vs. vanilla's 0).
- Architectural Efficiency: Compiled frameworks (Svelte) and reactive frameworks (Solid) can match or beat vanilla paint times while maintaining quality scores.
The 200,000‑Year Stress Test
Boundary Discovery
When vanilla handled a 20,000‑year dataset without performance degradation, the bottleneck shifted from execution to browser rendering. The <details open> element, when containing 20k rows, forced a reflow that dropped PSI scores by ~15 points.
This revealed a rendering boundary rather than a flaw in the artifact: the browser's layout engine, not the markup or code, was the limiting factor.
Implications
The 200k‑year test demonstrates that vanilla architecture scales to extremes that frameworks cannot reach due to their intrinsic overhead. While all implementations performed well at human scale, only vanilla maintained performance through geological‑scale stress testing.
This confirms the core thesis: procedural fidelity and structural coherence endure at scales that framework‑based architectures cannot reach.
Conceptual Framework: Digital Paratypes & Historical Analogies
Digital Paratypes
In taxonomy, a paratype is a specimen cited alongside the holotype, supporting the original description. By analogy, a digital paratype is a self‑contained artifact that packages data, analysis, and visualization into a single, citable file. It is not the dataset (holotype) but the executable interpretation of it.
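The packaging idea can be sketched in a few lines; the function and field names below are illustrative (the real artifacts embed their rendering logic inline as well):

```javascript
// Serialize data, markup, and a slot for inline logic into one HTML string.
// Nothing is fetched at runtime: the dataset travels inside the file.
function buildParatype(title, data) {
  const payload = JSON.stringify(data);
  return [
    '<!DOCTYPE html><html><head><meta charset="utf-8">',
    `<title>${title}</title></head><body><canvas id="chart"></canvas>`,
    `<script>const DATA = ${payload};`,
    '// rendering logic lives here, inline, with zero dependencies',
    '</scr' + 'ipt></body></html>' // split so this snippet can itself be inlined in HTML
  ].join('\n');
}

const file = buildParatype('Debt 1790-2024', [{ year: 1790, debt: 75463476 }]);
// Save `file` to disk and it opens in any standards-compliant browser.
```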
Historical Resistance to Tooling Shifts
Technological simplification often meets skepticism framed as lost rigor. Ballpoint pens (1940s) were dismissed as "not real writing" (Petroski 36). GUI‑based phylogenetic tools (2000s) faced resistance from command‑line practitioners who saw automation as illegitimacy (Vernon et al. 425). Today, vanilla JavaScript is sometimes treated as "not real engineering" in full-stack practice. In each case, craft identity became entangled with tool complexity, delaying adoption of more efficient, and often more durable, methods. This resistance carries a hidden cost: by favoring intricate tools, we often introduce dependency entropy that shortens the lifespan of the artifacts we create.
Cognitive Parthenogenesis: An Emergent Research Methodology
This work demonstrates an emergent phenomenon, here termed 'cognitive parthenogenesis': the asexual reproduction of expertise through computational means. Unlike traditional AI-assisted research focused on task automation, this 48‑hour sprint reveals a qualitative shift: encoding 73 months of domain‑specific heuristics into an AI ensemble enabled significant acceleration while maintaining ontological fidelity. The friction points documented (conflation errors, verification overhead) establish that cognitive parthenogenesis requires not just pattern replication but rigorous truth‑maintenance protocols, and critically, a knowledgeable human in the loop to verify outputs.
The Gilligan Ontology: Episodic Self‑Containment as a Preservation Strategy
Digital artifacts should follow what might be termed a "Gilligan Ontology": self‑contained, context‑free, and executable in isolation. Such artifacts demonstrate superior longevity compared to serialized architectures that require external dependencies to function. Just as episodic television survives format changes while serialized dramas decay with their broadcast networks, self‑contained web artifacts outlive framework‑dependent applications.
Gilligan's Island remains culturally accessible sixty years after production not despite its simplicity, but because of it. Each episode functions as a complete narrative unit, requiring no prior context, no serialized plot knowledge, no external infrastructure beyond a display device. The show migrated seamlessly from broadcast to cable to streaming precisely because it demanded nothing from its distribution medium except the ability to play video.
The parallel to web preservation is direct: vanilla HTML artifacts require nothing from their environment except standards‑compliant rendering. Framework‑dependent applications, like serialized television, encode assumptions about their ecosystem: package registries, build toolchains, version‑locked dependencies, that become liabilities the moment that ecosystem shifts. When the network changes, serialized shows break. When NPM deprecates a package, framework apps break. Self‑contained artifacts, like episodic sitcoms, simply continue to execute.
This study's five implementations test this ontological divide empirically. The vanilla artifact ran to 200,000 years before encountering browser rendering limits: in Firefox's SpiderMonkey it still works, though it moves like cold honey; in V8 it simply breaks. The framework implementations failed earlier not because their logic was flawed, but because their dependencies introduced temporal fragility. The Gilligan Ontology suggests that digital longevity is achieved not through sophisticated architecture, but through self‑contained design.
The Browser as Stable Infrastructure
Post‑"Browser Wars", in the era of Pax Browseri, the web platform has achieved unprecedented stability (W3C 4). HTML5, CSS3, and ES6 are now system‑level infrastructure with backward‑compatibility horizons measured in decades (WHATWG 12; ECMA 7). This allows vanilla artifacts to target a stable platform, while frameworks chase churning ecosystems.
The V8 Wall: Engine-Specific Boundaries at Geological Scale
The stress test revealed not a unified failure point but engine-specific boundaries. The same artifact, executing identical logic, produces divergent outcomes across browsers at 200,000 years. Where Firefox (SpiderMonkey) ran slowly at 200,000 years, Chrome and Edge (V8) encountered a low-level engine constraint: a true computational asymptote at the engine's internal limit, seen in the console as "Maximum call stack size exceeded". This "V8 Wall" demonstrates that platform stability extends only to the engine's design horizon.
Side Porch: The Mammalian Bias in Software Development
When presented with the phrase "born weaned" to describe self-contained artifacts, an AI reviewer immediately flagged it as paradoxical: "you can't be born already weaned." The correction came swiftly: reptiles do it, insects do it; it is common across the animal kingdom. The error revealed something deeper than a biological blind spot.
Software development operates almost exclusively on a mammalian model: long gestation periods of development, extended infancy requiring continuous parental care, complex dependency chains that function like nursing. We scaffold, we nurture, we maintain. An application without a package.json feels incomplete, like finding a newborn alone in the wilderness.
But reptilian reproduction offers a different template entirely. Lay the egg. Walk away. The offspring hatches complete, carrying everything it needs to survive. No nursing period. No NPM install. No parental process required.
Digital paratypes follow the reptilian model. A single HTML file contains the complete genome: data, logic, visualization. It hatches in any browser. No framework to install, no build process to run, no dependency tree to resolve. It works or it doesn't, but it never needs feeding.
The AI's mistake wasn't about biology. It was about paradigms. We've internalized dependency so deeply that independence reads as impossible.
SpiderMonkey (Firefox)
- Complete rendering of 200,000 years
- 0.74s LCP (cable)
- No console errors
- Graph rendered and functional: it renders, but its brain is full (Larson). Worthless but it works.
V8 (Chrome, Edge)
200000:225411 Uncaught RangeError: Maximum call stack size exceeded
at draw (200000:225411:23)
- Recursive depth exceeds engine limit
- Execution halts with explicit error
- Partial rendering
The "V8 wall", Chrome's and Edge's explicit stack overflow, contrasts with Firefox's completed run. This divergence reveals engine-specific recursion handling where identical JavaScript meets different runtime constraints.
Preservation Implications of Engine Divergence
- Engine diversity matters. An artifact that fails in one browser may work in another, increasing preservation odds across unknown future environments.
- Continuity matters more than clarity of failure. Firefox's Spidermonkey runs at 200K years, albeit slowly, preserving functionality; Chrome's and Edge's V8 crashes definitively. For preservation, a working artifact is preferable to a well-diagnosed broken one.
- Recursion depth is engine-dependent. For long-lived artifacts, iterative approaches may offer more predictable cross-engine behavior than recursive patterns.
The artifact works logically. It exceeds Chrome's recursion limit but runs fully in Firefox. This suggests the code is sound; it merely encounters engine-specific boundaries. Future browsers with different stack management could execute it unchanged. The preservation strategy becomes clear: target multiple engines, because survival in any one may be enough.
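The recursion boundary can be demonstrated in isolation. The draw logic below is a stand-in for the artifact's actual draw(), kept trivial so that only the call pattern differs between the two versions:

```javascript
// One stack frame per data point: V8's default stack allows on the order of
// 10^4 frames, so ~200,000 points throws RangeError long before the data ends.
function drawRecursive(points, i = 0, out = []) {
  if (i >= points.length) return out;
  out.push(points[i] * 2);            // stand-in for per-point canvas work
  return drawRecursive(points, i + 1, out);
}

// Same work, constant stack depth: predictable in every engine.
function drawIterative(points) {
  const out = [];
  for (const p of points) out.push(p * 2);
  return out;
}

const big = Array.from({ length: 200000 }, (_, i) => i);
let hitWall = false;
try { drawRecursive(big); } catch (e) { hitWall = e instanceof RangeError; }
const survived = drawIterative(big).length === 200000;
// In V8, hitWall is true (the "V8 Wall") while survived is true:
// the iterative rewrite does the identical work without touching the stack limit.
```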
The numerical alignment is striking, if coincidental: ~200,000 data points break the browser; ~200,000 years span human existence. One is a technical limit, the other a paleoanthropological estimate.
Conclusion: Building Artifacts That Outlive Their Tools
Digital scholarship remains trapped in a wet-nurse model of dependencies: frameworks, toolchains, and third-party services that nurture artifacts through infancy but leave them unable to function independently. This cultivates a Moraxella osloensis ecosystem, the musty odor of old wet rags and sponges, now digital. This paper proposes an alternative: digital paratypes, self-contained HTML files that package data, analysis, and visualization into single, citable, executable artifacts. Through human-AI collaborative sprints and comparative benchmarking across five implementations, it is shown that dependency-minimal design not only matches framework performance but excels in longevity-sensitive metrics, resisting the entropy that claims so much of the digital record.
The results lead to four concrete recommendations for researchers, developers, and publishers:
- Choose tools like you're building a pyramid, not a pop-up tent. For artifacts meant to outlive their authors, minimize dependencies. Every transitive package introduces another point of failure in the archival chain.
- Treat AI collaboration as a research methodology, not just a tool. Models can serve as compilers, debuggers, and adversarial reviewers, systematically reducing the time from idea to tested implementation while exposing edge cases human teams might miss. Getting them all to hold hands is the way. Find out what each model is good at. Using them adversarially against each other is counterproductive.
- Publish digital paratypes as first-class scholarly outputs. A single HTML file contains the complete interactive analysis, executable in any standards-compliant browser without installation or network access: a preservation strategy that leverages the web platform's backward-compatibility guarantees.
- Benchmark against synthetic extremes to reveal hidden bottlenecks. The 20,000-year test exposed rendering constraints invisible at human timescales, providing engineering constraints for truly long-lived artifacts.
In the end, the question is not whether our tools will decay, they will. The question is whether what we build can survive that decay. The digital paratype offers one answer: an artifact born weaned, designed for independence from the start, and built to endure not just the next browser version, but the next century.
Acknowledgments
This work emerged from decades of conversation with botanists, taxonomists, and digital stewards who taught us that specimens, whether pressed on herbarium sheets or rendered in HTML, must survive their collectors.
Special thanks to Mary Barkworth, Lowell Urbatsch, Ben Legler, Ed Gilbert, Kevin Thiele, Damien Barnier, Tony Reznicek, Wayt Thomas, David Simpson, Jim Bissell, and Dave Kriska for shaping the botanical data tradition. To GBIF and GenBank; clunky but glorious, and there when needed; for first showing us the world as data. And to Jim Beach for the essential question: "What the ^%$# are we doing here?"
We also want to acknowledge our AI collaborators as methodological partners: DeepSeek for architectural insight, Gemini for debugging, Claude for performance analysis, Manus for early agentic work, and ChatGPT for rigorous critique. Their contributions demonstrate that human-AI collaboration is now a viable research methodology, not merely a technical convenience.
Availability
The following implementations use U.S. national debt data (U.S. Department of the Treasury) for real-world baseline, with synthetic datasets for stress testing.
- Vanilla implementation at ~250 years (actual U.S. debt):
  https://tjid3.org/debt/van/van24.html
- Vanilla implementation at synthetic 1,000 years:
  https://tjid3.org/debt/van/1000.html
  First synthetic scale test. Still fast, no degradation.
- Vanilla implementation at synthetic 10,000 years:
  https://tjid3.org/debt/van/10000.html
  Deep time. Canvas rendering stress visible but functional.
- Vanilla implementation at synthetic 200,000 years:
  https://tjid3.org/debt/van/200000.html
  The V8 Wall. Chrome crashes with a call stack error. Firefox runs (slowly): open with SpiderMonkey to see a slow, solid blue graph spanning the existence of human time.
- React-mini at synthetic 20,000 years:
  https://tjid3.org/debt/react/r20000.html
  Stripped React holds here. Surprised me.
- React-mini at synthetic 100,000 years:
  https://tjid3.org/debt/react/r100000.html
  Framework degradation becomes visible.
Appendix: Laboratory Log
A day‑by‑day transcript of prompts, errors, fixes, and decisions is available at
https://github.com/TMJones/Uncensored-notes/blob/30d6453c91f5ae798cecf214888c52b1b80766a3/notes
(it is saucy, so be forewarned). This log documents the 26 React versions, the Angular‑to‑Solid pivot, the "Lord Kelvin" data adjustment, and each AI‑assisted breakthrough.
Works Cited
- Abdalkareem, Rabe, et al. "Dependency Hell in NPM: A Large-Scale Empirical Study of Semantic Versioning and Dependency Conflicts." Proceedings of the 15th International Conference on Mining Software Repositories, ACM, 2018, pp. 364–374, doi:10.1145/3180155.3180187.
- Bavota, Gabriele, et al. "The Half-Life of APIs: An Analysis of API Stability and Adoption in the Android Ecosystem." Proceedings of the 22nd ACM SIGSOFT International Symposium on the Foundations of Software Engineering, ACM, 2014, pp. 576–587, doi:10.1145/2597073.2597086.
- Candela, Gustavo, et al. "Sustainability Challenges of Large-Scale Digital Biodiversity Platforms." Journal of Digital Information, vol. 18, no. 2, 2017, pp. 1–16.
- ECMA International. ECMAScript® 2023 Language Specification. 14th ed., ECMA-262, June 2023.
- Green, Nicole, et al. "When Digital Gardens Become Ghost Towns: Platform Abandonment in Citizen Science." Proceedings of the iConference 2019, 2019, pp. 112–125.
- Larson, Gary. The Far Side. Cartoon depicting a student saying "Mr. Osborne, may I be excused? My brain is full." 1982.
- Lau, Edwin, et al. "The Life and Death of JavaScript Frameworks." Proceedings of the 10th ACM Conference on Web Science, ACM, 2018, pp. 334–343, doi:10.1145/3236024.3275526.
- Petroski, Henry. "The Ballpoint Pen: A Study in Technological Resistance." American Heritage of Invention & Technology, vol. 8, no. 2, 1992, pp. 34–41.
- U.S. Department of the Treasury. "Historical Debt Outstanding - Annual." TreasuryDirect, 2024, fiscaldata.treasury.gov/datasets/historical-debt-outstanding/
- Vernon, Kayla, et al. "Technological Conservatism in Scientific Practice: A Case Study of Phylogenetic Software Adoption." Social Studies of Science, vol. 51, no. 3, 2021, pp. 412–438.
- W3C. "Web Platform Design Principles." World Wide Web Consortium, W3C Working Group Note, 15 Dec. 2021.
- WHATWG. "HTML Living Standard." Web Hypertext Application Technology Working Group, 2024, html.spec.whatwg.org/multipage/.