Codetown ::: a software developer's community
All the tutorials and books for Node.js seem to use Mongo as the database. I am not sold on 'document' databases and would like to know how difficult it is to use plain old tried-and-true SQL with Node.js.
Does anybody have any experience in this area?
One of the traditional knocks on JavaScript has been the unpredictability of issuing SQL queries from an interpreted script, not to mention security. In other words, how do you regulate resources and results in a varying client environment? Node.js is supposed to provide a server-side capability, but I would be skeptical of its transactional support, such as a memento pattern or the ability to roll back transactions, at least until it has been thoroughly tested. The fact that most discussions pair Node.js with NoSQL databases is a clue to its intended usage: perhaps caches for local search tools like Solr, which are easy to update and rebuild, but less likely to serve as an efficient engine for individualized RDBMS queries.
Created by Michael Levin Dec 18, 2008 at 6:56pm. Last updated by Michael Levin May 4, 2018.

DuckDB Labs recently released DuckLake 1.0, a data lake format that stores table metadata in a SQL database rather than across many files in object storage. The first implementation is available as a DuckDB extension and includes catalog-stored small updates, improved sorting and partitioning options, and compatibility with Iceberg-style data features.
By Renato Losio
JobRunr has introduced ClawRunr, an open-source Java AI agent for scheduled, recurring, and one-off background tasks. Formerly JavaClaw, it runs on users' hardware and combines conversational interaction with persistent task execution, MCP tools, browser automation, and web, Telegram, and Discord channels, while using JobRunr for scheduling, retries, and monitoring.
By Diogo Carleto
Confluent introduces a new approach in Apache Kafka that moves schema IDs from message payloads to record headers, aiming to simplify schema governance and evolution. The update integrates with Schema Registry, improves compatibility across serialization formats, and reduces coupling between data and metadata in event-driven architectures.
By Leela Kumili
Meta has unveiled a new AI-driven capacity efficiency platform that uses unified AI agents to automatically detect and resolve performance issues across its global infrastructure, marking a significant step toward self-optimizing systems at hyperscale.
By Craig Risi
Hilary Mason shares her journey from academia to building AI products at scale. She discusses the shift from discrete engineering to probabilistic mindsets, explaining why managing "human considerations" is the hardest part of the stack. She also addresses the "existential crisis" facing engineers, arguing that great architecture today is about context management, systems thinking, and good taste.
By Hilary Mason
© 2026 Created by Michael Levin.