30 Years of Javascript on the Server
Javascript on the server, or ServerJS, as the community came to call it later, really should have been a success story from the very beginning. When Netscape introduced Javascript server-side as LiveWire in 1996, they got many things remarkably right.
The vision was elegant: use the same language on both client and server. We could write Javascript that validated a form field in the browser, then use the same Javascript to validate and process that same data on the server. This wasn't just convenient; it was conceptually powerful. The mental model was unified, the skills were transferable, and the code could be shared.
LiveWire pioneered ideas that would become standard practice decades later. It automatically tracked per-user state (via the client object), mapped database access to Javascript objects and their properties (via cursor.next() iteration), and had the ability to generate HTML markup dynamically on the server. The ability to embed server logic directly in pages would reappear in PHP, ASP, JSP, and eventually modern frameworks like Next.js and Fresh. Netscape understood that developers needed to generate HTML dynamically and maintain state across requests with tight integration of database access. They built those capabilities into LiveWire from the start.
Perhaps most impressively, LiveWire recognized that server-side code needed to be compiled and cached, not interpreted fresh on every request. The LiveWire compiler transformed Javascript into bytecode that the server could execute efficiently. This was sophisticated thinking for 1996.
Yet for all its technical foresight, Netscape made a fatal strategic error. Rather than releasing their server-side Javascript implementation as an open standard that could run on any web server, such as Apache or IIS, they kept it exclusive to their own Netscape Enterprise Server. Netscape was essentially trying to own the web: instead of participating in it, they wanted to be the company that dominated its future.
But the web was evolving according to different rules, based on principles of openness and accessibility. The technologies that would come to define it, like HTML, HTTP, and Javascript itself, succeeded because they were open standards that anyone could implement. By keeping LiveWire proprietary, Netscape prevented Javascript from catching on and dominating on the server from the start, as it did in the browser.
The 150,000 × Speed Increase in 30 Years
Netscape's first incarnation of Javascript on the server recognized the need for speedier code execution and added a build step that compiled the Javascript to bytecode. But web developers at the time wanted to modify and serve web pages without rebuilding first. Modern Javascript runtimes like Deno and Bun inherit the major performance gains made over the decades that eliminated this trade-off.
Between 1995 and today, Javascript execution speed improved by approximately 150,000 times. This isn't a typo. Javascript engines became roughly 430 times faster through software optimization and hardware became roughly 350 times faster in single-threaded raw execution speed. The compound effect is staggering: 430 × 350 = over 150,000 times faster.
The breakthrough came with Just-In-Time compilation. Instead of interpreting source code line by line, JIT compilers would translate Javascript into actual machine code on the fly while the program was running. This advancement sparked fierce competition between Google's V8 Javascript engine, Mozilla's SpiderMonkey, and Apple's JavascriptCore with the Nitro engine, all adding more and more sophisticated optimizations. Although this essentially was a browser war, Javascript on the server became a major beneficiary.
Seeds for an Emerging Ecosystem
Although Brendan Eich created Javascript in just a few days in 1995 to quickly include it in the browser, the server-side ambitions were there from the very beginning. Eich envisioned a language that could run both in the browser and on the server, enabling developers to write code once and deploy it across the entire web stack. This wasn't an afterthought or a later pivot, it was part of the original design philosophy.
LiveWire's proprietary lock-in prevented a full ecosystem from emerging, with tooling, libraries, community support, and distribution mechanisms. In the years that followed, various server-side Javascript projects took their own unique approaches to working around this.
WebCrossing set out to solve this challenge with a distinct architectural vision: an integrated platform where Javascript and database storage were tightly coupled from the start. WebCrossing provided a web application platform with built-in NoSQL database capabilities and Javascript as its scripting language. This was genuinely ahead of its time. The idea of a Javascript-native platform with integrated data persistence predates the modern "full-stack Javascript" movement by years. WebCrossing solved the lack of a tooling and library ecosystem by being extremely monolithic. It didn't just make the database data directly accessible as a traversable object tree in Javascript; it gave direct scripting access to a full stack including SMTP, POP, IMAP, NNTP, FTP, XML-RPC, and HTTP itself, all integrated in the same binary and directly scriptable, with all services interacting with the same content tree. WebCrossing showed that developers could build complete web applications in Javascript without integrating separate database systems, relying on external services, or managing complex deployment pipelines. But like LiveWire, WebCrossing was a proprietary platform, and its community stayed small. It proved the concept worked, but being proprietary, it couldn't generate the network effects necessary for mainstream adoption.
A very different architectural approach was taken by the Helma project, which ran Javascript on the Java Virtual Machine. This strategy had genuine merit. The JVM was mature, battle-tested, and supported by a massive enterprise ecosystem. If you could run Javascript on the JVM, you could theoretically leverage Java's threading model, its extensive standard library, its enterprise tooling, and its professional legitimacy. More importantly, the JVM offered something crucial: performance through HotSpot JIT compilation. This wasn't just about accessing Java's ecosystem—it was about getting serious runtime optimization for Javascript code.
Built on Mozilla's Rhino engine, Helma offered multi-threaded server-side Javascript with object-relational mapping capabilities, database integration, and a complete web application framework. It was architecturally sound, technically impressive, and genuinely production-ready. Developers could write Javascript code that accessed Java libraries, used Java's concurrency primitives, and deployed to any JVM-compatible server.
Helma compiled Javascript to Java bytecode, which then ran through the JVM's HotSpot compiler. This meant Helma benefited from the same kind of JIT compilation breakthroughs that would later make V8 revolutionary. HotSpot would profile the running bytecode, identify hot paths, and compile them to optimized machine code. For server workloads, where the same code paths execute repeatedly, this optimization was transformative. Helma wasn't a slow interpreted solution. It had serious performance optimization through one of the most mature JIT compilers in existence.
Helma's architecture made maximal use of Javascript's prototype-based inheritance, implementing a clean Model-View-Controller system whose parts you could keep in separate files. Javascript prototypes represented data and business logic, mapping to the database via Helma's built-in object-relational mapping (Model). These prototypes also had skin templates (Views), HTML components with placeholders and the ability to call macros, and prototype methods (Controllers) that were "actions" processing requests, or macro handlers callable from within the views. So Helma had a very clear and explicit MVC structure around five years before Ruby on Rails popularized the concept.
The e4xd version of Helma loaded the views as E4X objects. E4X, or ECMAScript for XML, was standardized as ECMA-357 in 2004. It allowed developers to write XML literals directly in Javascript code, treating XML as a native data type rather than strings to be parsed. Helma's E4X support was arguably superior to JSX, implemented 10 years later in React, solving essentially the same problem. Both gave the ability to have embedded markup directly in code, but JSX requires a build step. You have to transpile your JSX into regular Javascript before it can run. E4X didn't need that. It was a native Javascript language feature, integrated into the runtime. No build step, no transpiler, no configuration. The Rhino Javascript engine just understood XML literals the same way it understood string literals or array literals. E4X in Helma worked great to assemble and render HTML server-side, something Next.js with JSX came back to 10 years later.
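To make the comparison concrete, here is a side-by-side sketch. It is illustrative only: E4X was removed from modern engines years ago (so the first half will not run today), and the variable names are hypothetical.

```js
// E4X (Rhino / Helma): an XML literal is a native value. No build step:
// the engine parses it like it parses a string or array literal.
var item = <li class="entry">{title}</li>;

// JSX (React, roughly a decade later): looks similar, but must first be
// transpiled into ordinary function calls, e.g.
//   React.createElement("li", { className: "entry" }, title)
// before any Javascript engine can execute it.
const item = <li className="entry">{title}</li>;
```

The surface syntax is nearly identical; the difference is where the parsing happens, in the runtime (E4X) versus in a build step (JSX).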
While XML was a perfect fit for use as HTML templates, it was not ideal for data transport and interoperability, for which Javascript happened to have another native answer in the form of Javascript Object Notation. Douglas Crockford proposed JSON as a lightweight data interchange format in 2002. He saw that Javascript's native object syntax offered an efficient, human-readable, language-agnostic format for server-client communication. It was Javascript-native, meaning that parsing and generating JSON in Javascript required no external libraries or complex transformations. When using Javascript on the server, this was a natural match. Acceptance in other environments was more of an uphill battle, as XML had to be dethroned as the universal data format. As every other language implemented JSON parsers and generators, servers written in any language started to communicate with Javascript clients using JSON. It became the universal data format for web APIs.
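A minimal sketch of what "Javascript-native" means in practice: the serialize/parse round trip needs nothing beyond the built-in JSON object (the User shape here is just an example).

```typescript
// JSON is Javascript-native: serializing and parsing needs no libraries.
interface User {
  name: string;
  tags: string[];
}

const user: User = { name: "Ada", tags: ["admin", "ops"] };

// What a server would send over the wire...
const wire: string = JSON.stringify(user);

// ...and what the client gets back, structurally identical.
const received = JSON.parse(wire) as User;

console.log(wire);             // {"name":"Ada","tags":["admin","ops"]}
console.log(received.tags[0]); // admin
```

Every other language needed a library for this; in Javascript it was simply the object literal syntax the language already had.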
Solving Dependencies Sprouts the Ecosystem
In 2006, when jQuery popularized the use of closures in the browser, the Helma community started to experiment with ways to wrap Javascript library modules in closures as part of the module loading process, a concept the Helma team fully embraced in a lower-level architectural framework that became RingoJS. This module system approach later contributed to the formalization of CommonJS. When Mozilla's Kevin Dangoor wrote a blog post in 2009 encouraging ServerJS interoperability, the Helma community, amongst others, reached out to him, and the group that formed collaborated to define the CommonJS module specification.
In the absence of a native module system in the Javascript language standard, which would only arrive about six years later, CommonJS enabled Javascript environments to build an ecosystem of native libraries, much like what Helma enjoyed in the form of Java libraries. The require() function and module.exports pattern became standard across the entire server-side Javascript ecosystem.
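The closure-wrapping idea the Helma community experimented with is essentially what CommonJS loaders still do: wrap each module's source in a function scope so its top-level variables stay private. A toy sketch (real loaders also implement require(), caching, and path resolution, all omitted here):

```typescript
// A toy sketch of what CommonJS loaders do under the hood: each module's
// source is wrapped in a closure that receives its own `module` and
// `exports` objects, keeping top-level variables private.
type Module = { exports: Record<string, unknown> };

function loadModule(source: string): Module["exports"] {
  const module: Module = { exports: {} };
  // Wrap the source in a function scope: the closure trick.
  const wrapper = new Function("module", "exports", source);
  wrapper(module, module.exports);
  return module.exports;
}

const mathMod = loadModule(`
  var secret = 41;  // private to the module closure, invisible outside
  module.exports.addOne = function (n) { return n + secret - 40; };
`);

console.log((mathMod.addOne as (n: number) => number)(1)); // 2
```

Node's actual wrapper passes a few more parameters (require, __filename, __dirname), but the principle is the same closure.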
Node.js, a project first presented by Ryan Dahl at JSConf EU in November 2009, had the most explicit need for a proper native module system, and it chose CommonJS. Node's need for dependency handling and a module system was more dire because its aim was to make Javascript a first-class general-purpose server language, as opposed to just a web framework. To provide a registry of CommonJS modules and automatically download them with all of their dependencies, Isaac Schlueter implemented npm. As more packages appeared in the registry, it became easier to build complex applications by composing existing packages rather than writing everything from scratch. The "there's a package for that" culture emerged. Need to parse command-line arguments? There's a package. Need to make HTTP requests? There's a package. Need to validate email addresses? There's a package. The barrier to building sophisticated applications dropped dramatically, and the ecosystem exploded. By 2015, npm had hundreds of thousands of packages. By 2020, over a million.
The Pivot from JVM to Node
Before 2010, the JVM was the undisputed king of Javascript server-side development: mature, with a fast JIT and massive libraries. The arrival of Node marks a pivot point after which the most powerful and performant serverjs no longer lived on the JVM but outside of it.
Node came about at a moment when several innovations converged. Google's V8 engine suddenly made Javascript fast with JIT compilation. JSON had become the universal data format. CommonJS offered a sane module system with npm's exploding ecosystem. Node combined this with a switch from multi-threading to a single thread running an event loop for non-blocking I/O. Suddenly, writing performant web applications became possible outside the JVM, and Javascript went from being dismissed by mainstream developers as a 'browser toy' to a serious server platform almost overnight.
Node embracing non-blocking I/O was heavily inspired by nginx, which had already proven that a single-threaded, event-driven model built on OS facilities like epoll and kqueue could handle massive concurrency extremely efficiently. Instead of blocking a thread while waiting for I/O, you register a callback and move on. When the I/O completes, the callback executes. The event loop becomes the orchestrator: it processes events, executes callbacks, and never blocks. A single thread can handle thousands of concurrent connections because it's never waiting; it's always doing useful work or efficiently sleeping until the next event arrives. While nginx had proven the model's effectiveness for serving static content, applying it to application logic, where you're not just serving files but executing complex business logic with database queries and API calls, was different. Javascript happened to be a language where callbacks were natural, where asynchronous patterns were idiomatic, and where the event-driven model felt like the right way to write code. With Node, serverjs became incredibly efficient at I/O-heavy work (the main job of web servers), handling massive concurrency with low memory use.
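The register-a-callback-and-move-on control flow can be sketched in a few lines. This is a deliberately synchronous toy to show the shape of the idea; real runtimes build on libuv and operating-system event facilities, and the names here are made up for illustration.

```typescript
// Toy event loop: callbacks are queued when "I/O" completes and the
// loop drains them one by one. The single thread never blocks waiting
// for any individual operation.
type Callback = () => void;
const queue: Callback[] = [];
const log: string[] = [];

// Registering a callback stands in for starting a non-blocking I/O call.
function onComplete(name: string): void {
  queue.push(() => log.push(`${name} handled`));
}

onComplete("db query");
onComplete("api call");
log.push("loop still free"); // proof the thread was never blocked

// The event loop: process queued events until none remain.
while (queue.length > 0) {
  const cb = queue.shift()!;
  cb();
}

console.log(log.join(" | "));
// loop still free | db query handled | api call handled
```

The key observation is the ordering: the "loop still free" line runs before either callback, because registering work never blocks the thread.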
For the decade that followed, Node dominated Javascript on the server almost exclusively. During this time, the language itself matured dramatically. ES6, released in 2015 and officially called ES2015, brought classes, arrow functions, let and const for block scoping, destructuring, template literals, and Promises as first-class asynchronous primitives. In 2017, WebAssembly shipped in the major engines, bringing predictable floating-point types (f32 and f64), memory-safe sandboxed execution, and code reuse via libraries written in Rust, C, C++, and Go, amongst others. Async/await arrived in 2017, finally solving the promise-chain problem that had plagued developers. Subsequent years added top-level await, private fields, optional chaining, and nullish coalescing, each a small but meaningful improvement to developer ergonomics. And standardized modules, with import and export syntax, provided a native way to organize code, though adoption would lag for years as the ecosystem kept using CommonJS without feeling the need to switch.
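A quick tour of several of those language additions in one snippet (the user object is just example data):

```typescript
const user = { name: "Ada", prefs: { theme: "dark" } };

// Destructuring and template literals (ES2015)
const { name } = user;
const greeting = `Hello, ${name}`;

// Arrow function with block-scoped const (ES2015)
const double = (n: number) => n * 2;

// Optional chaining and nullish coalescing (ES2020):
// missing properties short-circuit to undefined, then fall back.
const theme = user.prefs?.theme ?? "light";
const missing = (user as any).settings?.lang ?? "en";

console.log(greeting, double(21), theme, missing);
// Hello, Ada 42 dark en
```

Each feature is small on its own, but together they changed the everyday texture of Javascript code.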
The Perfect Typescript Runtime
Javascript's dynamic nature, the fact that variables can hold any type and change type at runtime, was both a strength and a weakness. It made the language flexible and easy to learn, but it made large codebases hard to maintain. Refactoring was terrifying because you couldn't be sure what would break. IDEs couldn't provide good autocomplete or refactoring tools because they didn't know what types variables held. With the introduction of TypeScript came a solution: gradual typing. You could add type annotations to your Javascript code, and the TypeScript compiler would check those types at compile time, catching errors before runtime. But typing was optional: you could adopt it incrementally, adding types to the parts of your codebase where they provided the most value. The tooling was exceptional. IntelliSense provided accurate autocomplete. Refactoring tools could safely rename variables and functions across an entire codebase. By 2018, TypeScript had shifted from optional to essential in the ecosystem, but it was bolted on after the fact. You needed a compilation step, configuration files, type definitions for every package, and careful management of the boundary between typed and untyped code. It worked, but it felt like a workaround rather than a first-class feature.
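Gradual typing in one small, hypothetical example: the annotations document intent, and the compiler rejects mismatched calls before the code ever runs.

```typescript
// Annotated function: the compiler now knows both parameters and the
// return value are numbers.
function applyDiscount(price: number, percent: number): number {
  return price * (1 - percent / 100);
}

// applyDiscount("99", 10);
// ^ rejected at compile time: a string is not assignable to `number`.
// In untyped Javascript this would silently produce a wrong result
// at runtime instead.

const sale = applyDiscount(200, 25);
console.log(sale); // 150
```

The same function without annotations is still valid TypeScript, which is exactly what made incremental adoption possible.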
At JSConf EU 2018, in his talk "10 Things I Regret About Node.js", Ryan Dahl introduced Deno, which set out to make TypeScript a first-class feature and to address other shortcomings of Node. Dahl identified specific design decisions that made sense in 2009 but showed their age a decade later. The security model was one of them: Node.js gave programs full access to the filesystem and network by default, with no sandboxing and no permission system. TypeScript didn't exist yet when Node was designed, and Node made heavy use of callbacks rather than building on Promises from the beginning. Deno's design principles would directly address these limitations.
In Deno, security became the default posture rather than an afterthought. Deno programs run in a sandbox by default. If your code needs to read files, make network requests, or access environment variables, those permissions must be explicitly granted. Deno runs TypeScript directly. You write .ts files, and Deno handles the compilation transparently. Type checking is built into the runtime. The module system shifted to ES modules exclusively, using web-standard import syntax and URL-based imports. Instead of npm's centralized registry, Deno loads modules directly from URLs. You can import from any HTTP server, any CDN, any source that serves JavaScript files. Dependencies are cached locally after first download, but there's no package.json, no node_modules directory, no version resolution algorithm. In addition to supporting URL-based imports, the Deno team launched JSR, the JavaScript Registry, in 2024. Rather than a registry of tarballs like npm, JSR is TypeScript-native from the ground up. Packages publish their source directly, type information is built in, and documentation and type coverage scores are calculated automatically. Crucially, JSR packages work across runtimes: Node, Bun, and Deno alike. It's designed to be a registry for the entire serverjs ecosystem rather than a Deno-only replacement for npm.
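The permission model is visible right on the command line. A hypothetical server.ts illustrates the opt-in grants (the file name and the hosts/paths are examples, not real projects):

```shell
# With no flags, the sandbox denies filesystem, network, and env access;
# any attempt prompts or fails.
deno run server.ts

# Grant exactly what the code needs, optionally scoped to specific
# hosts and paths:
deno run --allow-net=api.example.com --allow-read=./config server.ts

# Environment variables are a separate, explicit grant as well:
deno run --allow-env=PORT server.ts
```

The scoping is the interesting part: a script granted network access to one host still cannot phone home anywhere else.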
Deno ships with built-in tooling: a code formatter, a linter, a test runner, and a documentation generator, all directly included in the single Deno executable. The Fresh framework, built on Deno, uses an islands architecture, which means that by default your pages ship zero JavaScript to the client. The server renders HTML and sends it to the browser. If you need interactivity, a dropdown menu, a form with validation, a real-time update, you mark that component as an island. Only those islands ship JavaScript. The rest of the page is static HTML. This gives you the performance and SEO benefits of server-side rendering with the interactivity of client-side JavaScript, but only where you actually need it.
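An island is just a component file placed where the framework looks for interactive code. An illustrative sketch, not a complete Fresh project (Fresh detects islands by their location in the islands/ directory; the counter itself is a made-up example):

```tsx
// islands/Counter.tsx
// Only this component ships JavaScript to the browser; every other part
// of the page remains the static HTML the server rendered.
import { useState } from "preact/hooks";

export default function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>
      Clicked {count} times
    </button>
  );
}
```

A page composed of ten server-rendered components and this one island ships only the island's code, which is the whole point of the architecture.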
Fresh integrates Preact, a lightweight React alternative, for its component model, providing JSX syntax, component composition and hooks, but with a smaller runtime and better performance. The framework's approach to routing is file-based, which Next.js showed to be more intuitive than configuration-based routing. Like in Remix, co-locating data loading with components reduces complexity. SvelteKit had proven that component based progressive enhancement can be the sweet middle ground between client-side and server-side rendering, which informed Fresh's islands concept.
Next Heights on the Horizon
Deno and Fresh synthesize the lessons learnt over 30 years of serverjs experience and offer, in my opinion, the best overall runtime and web framework currently available for Javascript on the server. Of course, other runtimes have legitimate use cases depending on the specific niche of a project. After all, we can now choose from an astonishingly large list of available Javascript runtimes.
Node.js remains the enterprise and legacy standard with unmatched ecosystem depth; Deno excels in security, TypeScript-first development, and modern defaults; Bun dominates when raw speed and all-in-one tooling are critical; Cloudflare Workers and other WinterCG edge runtimes shine for globally distributed, low-latency applications. For embedding JavaScript into custom applications, developers can choose from V8 (high-performance, widely used), GraalJS (excellent Java interop on the JVM), QuickJS (extremely lightweight), Boa (pure Rust, experimental), Hermes (mobile-optimized), or SpiderMonkey and JavaScriptCore for specific integration needs. Desktop apps are well served by Electron (Chromium + Node). That's a crazy number of runtime options that no other language comes close to. Java deserves an honorable mention, but the runner-up in number of runtime implementations is actually closely related to the Javascript world itself: WebAssembly, together with AssemblyScript.
WebAssembly (WASM) is a binary instruction format designed to run at near-native speed in web browsers and JavaScript runtimes. It's not a replacement for JavaScript, it's a complement. WASM modules compile from languages like Rust, C++, or AssemblyScript to portable bytecode that executes in a sandboxed environment. The performance characteristics are dramatic: cryptographic operations run 10-30x faster, numerical computations approach native C speeds, and memory-intensive operations avoid garbage collection overhead entirely.
The integration pattern is elegant. JavaScript handles application logic, user interaction, and orchestration. WASM handles the performance-critical operations that JavaScript can't do efficiently. A web application might use JavaScript for UI rendering and state management while delegating image processing to a WASM module compiled from Rust. A data analysis tool might use JavaScript for the API and visualization while running statistical computations in WASM. The boundary between the two is explicit and intentional.
AssemblyScript lowered the barrier to this integration significantly. Before AssemblyScript, using WASM meant learning Rust or C++, understanding their build toolchains, and managing the impedance mismatch between those languages and JavaScript. AssemblyScript provides a TypeScript-like syntax that compiles directly to WebAssembly. JavaScript developers can write code that looks familiar, with classes, functions, and type annotations, and compile it to WASM without leaving the JavaScript ecosystem's mental model. It's not as performant as hand-optimized Rust, but it's far faster than JavaScript and dramatically more accessible.
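A small sketch of what AssemblyScript looks like. Note this is not standard TypeScript: the i32 and f64 types below are AssemblyScript's WebAssembly-native numeric types, and the file compiles to a .wasm module with the asc compiler rather than running in a Javascript engine.

```ts
// assembly/sum.ts (hypothetical file): compiled with `asc` to WASM.
// Looks like TypeScript, but i32/f64 map directly to WebAssembly's
// native value types, so the compiler emits tight numeric code.
export function sum(values: Float64Array): f64 {
  let total: f64 = 0;
  for (let i: i32 = 0; i < values.length; i++) {
    total += values[i];
  }
  return total;
}
```

From the Javascript side, the compiled module is just another import; the familiar syntax is what makes the round trip approachable.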
Cryptographic operations are a clear case where resorting to WASM makes sense. Hashing, encryption, and signature verification are CPU-intensive and security-critical; running these in WASM provides both speed and isolation. Image and video processing, numerical computing, and scientific simulations are other domains where WASM shines, with gains often reaching 20x or more.
In Deno, WASM modules load with the same ES module syntax as JavaScript: import { process } from "./image.wasm";. Deno's security model extends to WASM code: modules run in the same sandbox with the same permission system. So WASM is a first-class citizen of the ecosystem, well integrated at the runtime level.
The Next Ten Years should be Interesting
Thirty years in, serverjs is in the best shape it has ever been. The runtimes are fast, the tooling is mature, and the ecosystem is converging rather than fragmenting. What I'm watching most closely is the AI layer, because Javascript happens to have genuine advantages for applications that tightly integrate AI agents and language models. The properties that made Javascript well-suited for I/O-heavy web servers turn out to make it equally well-suited for the emerging AI application layer. Large language models respond as token streams, not single payloads — and streaming has been idiomatic in Javascript since Node's early days. JSON, which Javascript treats as a native data type, is the lingua franca of LLM APIs. Async/await, which the language spent years refining, maps naturally onto the latency profile of model inference.
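The token-stream point is easy to show in miniature. Real SDKs expose async iterables consumed with for await; this simplified sketch uses a synchronous generator standing in for the model, with made-up tokens, to show the incremental-consumption shape.

```typescript
// A generator stands in for an LLM's token stream: the response
// arrives piece by piece, not as one payload.
function* tokenStream(): Generator<string> {
  for (const token of ["Server", "-side ", "JS ", "streams."]) {
    yield token;
  }
}

let rendered = "";
for (const token of tokenStream()) {
  rendered += token; // in a real app: flush each token to the client here
}

console.log(rendered); // Server-side JS streams.
```

Swap the generator for a model API's async iterable and the consuming loop for a for await, and this is the idiomatic shape of most Javascript AI integrations.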
As developers build applications that orchestrate multiple AI models, chain tool calls, and stream results to users in real time, Javascript is becoming the dominant glue language of the AI stack. The Vercel AI SDK, LangChain.js, and a growing ecosystem of serverjs-native AI libraries reflect this. The combination of Deno's secure defaults, Fresh's progressive enhancement model, and maturing browser/edge inference via WebGPU + Wasm (now achieving 60–85% of native performance for quantized models on capable hardware) is particularly compelling. While Python remains dominant for core model research and heavy training workloads, JavaScript is increasingly the language of AI delivery — the layer that turns models into usable, responsive products.
Another area worth watching is AssemblyScript as a deterministic compilation target. By restricting JavaScript/TypeScript to a predictable subset that compiles to clean WebAssembly, it offers a potential bridge for bringing JavaScript-like developer experience into environments that demand strict determinism, especially for smart contracts. While still niche, the combination of TypeScript familiarity with near-native performance and reproducibility could open interesting new frontiers as blockchains begin to support the AssemblyScript and WASM combo for smart contracts more widely.
Oracles are a related and equally promising direction. Smart contracts are deterministic and isolated by design — they cannot reach the outside world on their own. Oracles bridge this gap by bringing real-world data onto the blockchain: prices, weather, sports results, API responses. Projects like XEQMLabs and DarkFi are exploring privacy-first oracle infrastructure, where the query content, requester identity, and purpose remain private by design, which could make the use of oracles much more popular. An oracle node needs to fetch from web APIs, parse and transform JSON responses, handle errors and edge cases, and deliver a verified result — which is a serverjs job description. Chainlink Functions makes this explicit: it runs JavaScript code on a decentralised oracle network, letting developers write standard JS that fetches from any HTTP endpoint and pipes the result on-chain. The entire web API ecosystem becomes accessible to smart contracts through a layer developers already know how to write. The determinism requirement ties back to AssemblyScript: multiple oracle nodes must independently arrive at the same result for consensus, which demands reproducible execution. AssemblyScript's WASM-compiling subset of TypeScript provides oracle logic that is both developer-friendly and verifiably deterministic. Deno's sandbox model maps naturally onto oracle node security requirements too, making it straightforward to restrict exactly which network endpoints and resources the oracle code is permitted to access.
As these pieces continue to mature — better GPU integration, stronger agent observability, and viable deterministic pathways like AssemblyScript — JavaScript will increasingly define interactive and orchestration layers for smart contracts, oracles, and AI.
The next ten years should be very interesting indeed.