weitendorf 3 hours ago

I've been working on a hybrid protobuf ORM/generic CRUD server based on sqlite

(code at https://github.com/accretional/collector - forgive the documentation. I'm working on a container-based agent project and also trialling heavy use of agents to write the individual features. It's working pretty well, but the agents have been very zealous about documenting things lol).

This is my first real project using sqlite and we've hit some similarly cool benchmarks:

* 5-15ms downtime to backup a live sqlite db with a realistic amount of data for a crud db

* Capable of properly queueing hundreds of read/write operations when temporarily unavailable due to a backup

* e2e latency of basically 1ms for CRUD operations, including proto SerDe

* WAL lets us do continuous, streaming, chunked backups!

Previously I'd only worked with Postgres and Spanner. I absolutely love SQLite so far - I'd still use Spanner for some tasks given an infinite budget, but once we get Collector to implement partitions I don't think I'd ever use Postgres again.
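
For anyone curious about the snapshot mechanics: a minimal sketch, assuming a Go app with the mattn/go-sqlite3 driver (not Collector's actual backup path - file names and details are illustrative), using SQLite's built-in VACUUM INTO (3.27+), which writes a consistent copy while WAL-mode readers and writers keep going:

    package main

    import (
        "database/sql"
        "fmt"
        "log"
        "time"

        _ "github.com/mattn/go-sqlite3" // assumed driver
    )

    func main() {
        db, err := sql.Open("sqlite3", "app.db")
        if err != nil {
            log.Fatal(err)
        }
        defer db.Close()

        // WAL mode: readers and writers are not blocked while the copy runs.
        if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
            log.Fatal(err)
        }

        dest := fmt.Sprintf("backup-%d.db", time.Now().Unix())
        start := time.Now()
        if _, err := db.Exec(fmt.Sprintf("VACUUM INTO '%s'", dest)); err != nil {
            log.Fatal(err)
        }
        log.Printf("snapshot %s written in %s", dest, time.Since(start))
    }

The continuous/chunked streaming part is a separate mechanism (shipping WAL frames, litestream-style); the above is only the point-in-time snapshot primitive.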

  • bombela 2 hours ago

    > * 5-15ms downtime to backup a live sqlite db with a realistic amount of data for a crud db

    Did you consider using a filesystem with atomic snapshots? For example SQLite with WAL on BTRFS. As far as I can tell, this should have decent mechanical sympathy.

    edit: I didn't really explain myself. This is for zero downtime backups. Snapshot, backup at your own pace, delete the snapshot.

matthewaveryusa 4 hours ago

The only caveat is that this assumes all your data can fit on a single machine, and all your processing can fit on one machine. You can get a u-24tb1.112xlarge with 448 vCPUs and 24TB of RAM for $255/hour and attach 64TB of EBS -- that's a lot of runway.

  • tuhgdetzhh 3 hours ago

    Or rent a bare-metal machine from Hetzner with 2-3x the performance per core at 90% lower cost[1].

    [1] Various HN posts regarding Hetzner vs AWS in terms of costs and perf.

    • tetha 3 hours ago

      In my experience, a decently managed database scales a very long way.

      3x EX44 running Patroni + PostgreSQL would give you 64GB of working memory and at least 512GB of NVMe for the dataset (configurable with more for a one-time fee), with HA plus one maintenance node. Practically speaking, that would have carried the first 5 - 10 years of production at the company I work at with ease, for 120 Euros of hardware cost/month plus a decent sysadmin.

      I also know quite a few companies who toss 3-4x 20k - 30k at Dell every few years to get an on-prem database cluster so that database performance ceases to be a problem (unless the application has bad queries).

    • ethanwillis 2 hours ago

      This might be true in terms of direct monetary costs.

      I want to like Hetzner, but the bureaucratic paperwork of starting to interact with them, and continuing to interact with them, is just... awful.

      Not that the other clouds don't also have their own insane bureaucracies so I guess it's a wash.

      I'm just saying, I want a provider that leaves me alone and lets me just throw money at them to do so.

      Otherwise, I think I'd rather simply deploy my own oversized server in a colo, even at today's insanely inflated hardware prices.

      edit: And shortly after writing this comment I see: "Microsoft won't let me pay a $24 bill, blocking thousands in Azure spending" https://news.ycombinator.com/item?id=46124930

      • earthnail 2 hours ago

        Can you elaborate on what bureaucracy you experienced? I've been a Hetzner customer since last month and so far I thoroughly enjoy it; I haven't encountered any bureaucracy yet.

      • Nextgrid 2 hours ago

        > I want a provider that leaves me alone and lets me just throw money at them to do so.

        That’s been my experience with Hetzner.

        A lot of people get butthurt that a business dares to verify who they're dealing with so as to filter out the worst of the worst (budget providers always attract those), but as long as you don't mind the reasonable requirement to verify your ID/passport, they're hands-off beyond that.

  • DenisM 3 hours ago

    Scale-up solves a lot of problems for stable workloads. But elasticity is poor, so you either live with overprovisioned capacity (multiples, not percentages) or fail under spiky load, which is often the most valuable moment (viral traffic, Black Friday, etc).

    No one has solved this problem. Scale out is typically more elastic, at least for reads.

    • kragen 3 hours ago

      That's a good point, but when one laptop can do 102545 transactions per second, overprovisioned capacity is kind of a more reasonable thing to use than back when you needed an Amdahl mainframe to hit 100 transactions per second.

      • DenisM an hour ago

        As compute becomes cheaper your argument becomes more and more true.

        But it only works if workloads remain fixed. If workloads grow at similar rates you’re back to the same problem.

        • kragen an hour ago

          Well, it doesn't work for the newly added workloads. But for the most part we instead have the same workloads performed less efficiently.

    • CuriouslyC an hour ago

      I love Hetzner for internal resources because they're not spiky. For external stuff I like to do co-processing: you can load balance to cloudflare/aws/gcp services like containers/Run/App Runner/etc.

  • adityaathalye 3 hours ago

    Heh, the documentation calls out the limits. Maximum (theoretical) DB size is 281TB: https://sqlite.org/limits.html

    > This particular upper bound is untested since the developers do not have access to hardware capable of reaching this limit.

    > However, tests do verify that SQLite behaves correctly and sanely when a database reaches the maximum file size of the underlying filesystem (which is usually much less than the maximum theoretical database size) and when a database is unable to grow due to disk space exhaustion.

  • kiitos 2 hours ago

    and that your application doesn't need to be resilient to host or network faults

  • embedding-shape 3 hours ago

    > The only caveat being this assumes all your data can fit on a single machine

    Does my data fit in RAM? https://yourdatafitsinram.net/

    Not sure using EC2/AWS/Amazon is a good example here; if you're squeezing for large single-node performance you'd most certainly go for dedicated servers, or at least avoid vCPUs like the plague.

    • jandrese 3 hours ago

      That site is a bit questionable. I entered "64TB" as the answer and it was very happy to show me a bunch of servers that maxed out at 6 or 8TB. Even the one server that listed 64TB of RAM might be questionable since it's not leaving room for the OS or your applications. That said 64 TB is a gargantuan amount of data, so I'm not too worked up over it not fitting in RAM. Lord help you if you have a power outage and have to reload the data from disk.

    • paulddraper an hour ago

      How does 25 TiB fit in RAM when the max machine has 24 TB?

diath 3 hours ago

> Hopefully, this post helps illustrate the unreasonable effectiveness of SQLite as well as the challenges you can run into with Amdahl's law and network databases like postgres.

No, it does not. The article first says that normally you would run the application and the database on separate servers, and then starts measuring the performance of a locally embedded database. If you have to keep the initial requirement for your software, then SQLite is completely out of the equation. If you can change the requirement, then you can achieve similar performance by tuning a local PGSQL instance -- and then it also becomes an evaluation of features and not just raw throughput. I'm not saying SQLite is not an option either, but this article seems confusing in that it compares two different problems/solutions.

  • sethev 2 hours ago

    Right - but SQLite handily beats the case where postgres is on the same box as well. And it's completely reasonable to test technology in the configuration in which it would actually run.

    As an industry, we seem to have settled on patterns that are actually quite inefficient. There's no problem that requires doing things inefficiently just because someone said databases should run on a different host.

    • stickfigure 35 minutes ago

      If you're going to run on more than one piece of hardware, something is going to be remote to your single writer database.

      As an industry, we've generally decided against "one big box", for reasons that aren't necessarily performance related.

  • BugsJustFindMe 2 hours ago

    > If you have to keep the initial requirement for your software, then SQLite is completely out of the equation.

    No it isn't? You can run a thin sqlite wrapping process on another server just fine. Ultimately all any DB service is, PostgreSQL included, is a request handler and a storage handler. SQLite is just a storage handler, but you can easily put it behind a request handler too.

    Putting access to sqlite behind a serial request queue used to be the standard way of implementing multi-threaded writes. That's only spitting distance away from also putting it behind TCP.
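
    To make that concrete, a hypothetical sketch in Go (driver, port, and endpoint are my own illustrative choices, not any particular project's API):

        package main

        import (
            "database/sql"
            "fmt"
            "io"
            "log"
            "net/http"

            _ "github.com/mattn/go-sqlite3" // assumed driver
        )

        func main() {
            db, err := sql.Open("sqlite3", "app.db")
            if err != nil {
                log.Fatal(err)
            }
            db.SetMaxOpenConns(1) // one connection = the serial request queue
            db.Exec("PRAGMA journal_mode=WAL")

            // The "request handler": accept a SQL statement, run it on the local db.
            http.HandleFunc("/exec", func(w http.ResponseWriter, r *http.Request) {
                stmt, _ := io.ReadAll(r.Body) // trusted callers only in this sketch
                res, err := db.Exec(string(stmt))
                if err != nil {
                    http.Error(w, err.Error(), http.StatusBadRequest)
                    return
                }
                n, _ := res.RowsAffected()
                fmt.Fprintf(w, "rows affected: %d\n", n)
            })
            log.Fatal(http.ListenAndServe(":8080", nil))
        }

    Serialising everything through one connection is the same old single-writer queue, just reachable over TCP.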

    • chasil 4 minutes ago

      As long as WAL mode is not enabled, connections over NFS/SMB or other file sharing protocols will work.

      I'm not saying that this is a good idea, and it could fail in a spectacular manner, but it can be done. DML over this is just asking for trouble.

    • Barathkanna an hour ago

      Exactly. People forget that “SQLite can’t do X” often really means “SQLite doesn’t ship with X built in.” If you wrap it with a lightweight request handler or a queue, you essentially recreate the same pattern every other DB uses. The fact that PostgreSQL bundles its own coordinator doesn’t make SQLite fundamentally incapable. It just means you choose whether you want that layer integrated or external.

    • formerly_proven 2 hours ago

      Well that's just dqlite/rqlite.

      • BugsJustFindMe 2 hours ago

        Dqlite and rqlite are primarily for building fault-tolerant clusters. But if you just take the network access part, then OK sure, but also so what?

        • otoolep an hour ago

          rqlite[1] creator here.

          Nit: dqlite is a library, not a network-exposed database like rqlite is. Sure, it requires connecting to other nodes over the network, but local access is in-process. In contrast, one connects to rqlite over the network - HTTP specifically.

          [1] https://rqlite.io

  • andersmurphy 3 hours ago

    Paradoxically, raw throughput matters a lot more if you are going to scale on a single box. SQLite is 10x PG on a single box in this example; considering databases tend to be the bottleneck, that can take you an order of magnitude further. PG on the same server will also be slower the more complex the transaction, as Unix sockets are still considerably slower than a function call.

    The other thing to point out in this article is that the PG network example CANNOT scale horizontally, because of the serial portion Amdahl's law describes. You can throw a super cluster at the problem and still fundamentally do around 1000 TPS.
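
    To make that ceiling explicit (assuming 100% contention, i.e. every transaction serialises on the same row lock):

        max TPS ≈ 1 / (time the contended lock is held per transaction)
        5 ms of network round trips inside the transaction → ~200 TPS
        1 ms → ~1,000 TPS
        in-process calls measured in microseconds → 100,000+ TPS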

    • prisenco an hour ago

      Also important is just how fast cheap hardware has gotten, which means vertical scaling is extremely effective. People could get a lot farther than they imagine with SQLite in WAL mode on a single box with an NVMe drive. Feels like our intuition has not caught up with the material reality of current hardware.

      And now that there are solid streaming backup systems, the only real issue is redundancy, not scaling.

    • barfoure 2 hours ago

      > Paradoxically, raw throughput matters a lot more if you are going to scale on a single box.

      There’s absolutely nothing paradoxical about any of this.

  • embedding-shape 3 hours ago

    > If you have to keep the initial requirement for your software, then SQLite is completely out of the equation.

    It'd be a very short article if so, don't you think? Full article would be something like: "Normally you'd have a remote connection to the database, and since we're supposed to test SQLite's performance, and SQLite is embedded, it doesn't compare. Fin"

    • stonemetal12 3 hours ago

      The table of data at the end of the article has 7 lines, and only one has data for both DBs. What was the point of setting up the comparison if no comparison is made?

      • andersmurphy 3 hours ago

        Because it shows that a networked RDBMS cannot get you out of this predicament.

  • fulafel 2 hours ago

    What it says first is: "SQLite is for phones and mobile apps (and the occasional airliner)! For web servers use a proper database like Postgres!"

    Though I'd say it's for a broader set of applications than that (embedded apps, desktop apps, low-concurrency server apps etc).

    Phone and mobile app installations of course outnumber web app deployments, and it doesn't say what you paraphrased about servers.

badmonster 3 hours ago

The real insight here is recognizing when network latency is your bottleneck. For many workloads, even a mediocre local database beats a great remote one. The question isn't "which database is best" but "does my architecture need to cross network boundaries at all?"

  • andersmurphy 3 hours ago

    (author here) Yes, 100% this. This was never meant to be a SQLite vs Postgres article per se, more about the fundamental limitations of networked databases in some contexts. Admittedly, at times I struggled to convey this in the article.

  • slashdave 2 hours ago

    Sure. Now keep everything in memory and use redis or memcache. Easy to get performance if you change the rules.

    • koakuma-chan 2 hours ago

      You can use SQLite for persistence and a hash map as cache. Or just go for Mongo since it's web scale.

tomconnors 2 hours ago

Cool stuff as usual, Anders. One of the nice things about running a networked DB is that it makes redeploying the application a bit simpler: you can spin up a new EC2 instance or whatever and, once it's online, kill the old one. That gets you zero or close-to-zero downtime. If the DB is on the same instance, replacing the instance requires moving the DB onto the new one, which seems more error prone than just restarting the app in place, but in my experience restarting in place typically incurs downtime or some complicated handoff logic. Have you had to deal with anything like that running sqlite in prod?

  • andersmurphy 2 hours ago

    Oh there are a bunch of considerations.

    You're going to want persistent storage on your server, not ephemeral. You'll also want NVMe. A lot of the time you're going to end up on bare metal running a single server anyway.

    You're going to have down time for migrations unless you're very clever with your schema and/or replicas.

    Litestream, for me at least, is what makes SQLite viable for a web app, as prior to that there wasn't a good replication story.

    With Litestream it's much easier to have a backup on standby. That being said, where I have used it in production some amount of downtime has been acceptable, so mileage may vary.

    • kiitos an hour ago

      > You're going to have down time for migrations unless you're very clever with your schema and/or replicas.

      probably worth stating these kinds of design considerations/assumptions up-front

      i'm sure lots of applications are fine with "downtime for [database] migrations" but lots more are definitely not, especially those interested in synthetic metrics like TPS

janci an hour ago

How does SQLite handle HA setups? The minimum I want is reliable automatic failover in a reasonable time for a user-facing service. Ideally an active-active setup.

  • vhodges 17 minutes ago

    sqlite is just a library (in C)

    A few projects:

      * https://github.com/rqlite/rqlite Distributed, fault tolerant cluster
      * https://litestream.io/ Replication to S3 (or compatible) - more disaster recovery than failover
      * https://fly.io/docs/litefs/ (Same author as Litestream.) Distributed replication; requires writes to be redirected to the primary.
    
    I am debating Postgres vs sqlite (probably with litestream) for a project right now.

    And other than HW redundancy, I can get pretty far by scaling vertically on a single box. And for my app, I (and my users!) could probably live with some occasional downtime (as long as the data is replicated/backed up).

    If I get 20-50K users it'll be a successful venture, so I don't need much these days, and it will be cheaper and easier to run as well.

    • otoolep a minute ago

      rqlite creator here, happy to answer any questions.

tomasol an hour ago

The author is setting PRAGMA synchronous="normal", meaning fsync is not issued as part of every write tx, only eventually. To make the comparison fair it should be set to "full".

  • andersmurphy 12 minutes ago

    PRAGMA synchronous="normal" is fine if you are in WAL mode. The database cannot be corrupted by power loss unlike in journal mode.

    > The synchronous=NORMAL setting provides the best balance between performance and safety for most applications running in WAL mode. You lose durability across power loss with synchronous NORMAL in WAL mode, but that is not important for most applications. Transactions are still atomic, consistent, and isolated, which are the most important characteristics in most use cases.

    • tomasol a minute ago

      fsync is the most expensive operation during a write. NORMAL mode means you don't care whether the last ~100 ms of transactions before a process crash / VM restart get persisted or not. My suggestion is either to use synchronous="full" or to disable `synchronous_commit` on Postgres, to avoid comparing apples to oranges.

andrewvc 34 minutes ago

The HN SQLite worship posts have gotten out of hand. What's next, a post on how appending to files is faster than Kafka?

It's great that some people have workloads this is a fit for. What's more common is the use case that managed databases like RDS etc. solve for: you have some quantity of data you want to always be there, available over a network for whatever app(s) need it, with backups, upgrades, access control etc. solved for you.

I love SQLite and reach for it for hobby projects, but as a product for general business apps it is quite niche. It has the qualities that make for any popular product on HN: a great getting-started experience and a complex maintenance and operational experience.

jaketoronto an hour ago

Are you limiting your # of connections to postgres to 8? Is this unnecessarily throttling your throughput? This seems like quite the bottleneck... connection pools are good when your app is overwhelming your db.. but in this case, you really should be trying to put more load on Postgres... I'm concerned that this whole experiment is tainted by this choke point. I would love to see this tested again with a much larger connection pool. (Also, might be nice for you to share what the CPU and thread usage on the Postgres side was during this). (Side note: I'm a big fan of sqlite.. but I've run into lots of performance bottlenecks caused by arbitrarily low connection pools and the like)

  • jaketoronto an hour ago

    To further explain:

    You mention setting the conn pool to 8 to match your # of cores. That would be fine if you didn't have any sleeps inside of your txns... But the moment you added the sleeps inside the txns, your limit of 8 kills your throughput... because no other thread can access the DB once 8 of them grab connections and start the 20ms of total sleep. Imagine instead if you had 64 connections... you would 8x your throughput... What if you were to go even higher? At some point you might start overloading the DB... at that point, you could consider tuning the db to accept more connections... or... maybe you've truly reached the DB's peak performance limit.

    I just don't think that 8 connections represents that limit... you need to do everything you can to open up your client config until you reach PG's limitations.
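
    Back-of-the-envelope, assuming each transaction pins a connection for the full ~20ms of sleep:

        throughput ceiling ≈ pool size / time each txn holds a connection
        8 connections  / 0.020s per txn ≈ 400 TPS
        64 connections / 0.020s per txn ≈ 3,200 TPS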

alexwennerberg 3 hours ago

Does anyone have examples of organizations that have leveraged SQLite and written about their experience? I've read a lot of theory and benchmarks about it lately and it seems extremely impressive, but I'm wondering if anyone has written about pushing it to its limits "in production"

ducktective an hour ago

Does anyone have rough numbers (max daily users etc) on the viability of SQLite vs PostgreSQL for a typical user-facing webapp or e-commerce application?

I know that, due to some recent update, SQLite can support concurrent reads but still only a single writer. For which cases would this be a problem?

Some recommend it's better to start with Postgres anyway if you have any remote thoughts of scaling in mind...

bob1029 an hour ago

This is great until you encounter a customer with a hard RPO requirement of 0. SQLite has a few replication options, but I would never trust this in high stakes domains over PGSQL/MSSQL/DB2/Oracle/etc.

  • BenjiWiebe 41 minutes ago

    I'm curious, is an RPO of 0 truly expected or needed? I can easily believe that some places would "require" it.

    What kind of data is so critical that the data from a quarter second before catastrophic destruction must be saved?

    I guess weapons testing, at least... But that wouldn't be streaming data of that importance for a very large % of time.

pmdulaney 3 hours ago

Prefer "surprising" to "unreasonable".

  • jhoechtl 3 hours ago

    But that wouldn't be cool

jcadam an hour ago

I've been getting some good results from sqlite-vec on my current project. I need to look at geospatial extensions next.

shadowgovt 34 minutes ago

Good. I feel like people keep discovering the principle that "when the engine does far, far fewer things, it's faster", but I certainly won't bash people for learning it, however they get there.

gausswho an hour ago

SQLite DBs do not reliably survive Syncthing across multiple devices. Until that changes, I'll keep looking for a DB design that does.

westurner 3 hours ago

That's a helpful TPS Report.

TIL `SAVEPOINT` can occur inside a BEGIN ... END SQLite transaction, and that it works for optimizing batch size on a particular node under a given load.

Is there a solution for SQLite WAL corruption?

From https://news.ycombinator.com/item?id=45133444 :

> "PSA: SQLite WAL checksums fail silently and may lose data" https://news.ycombinator.com/item?id=44672902

> sqlite-parquet-vtable, [...]

  • matharmin 3 hours ago

    As mentioned in those threads, there is no SQLite WAL corruption if you have a working disk & file system. If you don't, then all bets are off - SQLite doesn't protect you against that, and most other databases won't either. And nested transactions (SAVEPOINT) won't have any impact on this - all it does in this form is reduce the number of transactions you have.

    • westurner 3 hours ago

      > working disk & file system

      And a working ECC or non-ECC RAM bus, and [...].

      How bad is recovery from WAL checksum / journal corruption [in SQLite] [with batching at 100k TPS]?

      And should WAL checksums be used for distributed replication "bolted onto" SQLite?

      >> (How) Should merkle hashes be added to sqlite for consistency? How would merkle hashes in sqlite differ from WAL checksums?

      SQLite would probably still be faster over the network with proper Merkleization

dmezzetti 3 hours ago

I've used SQLite as the content storage engine for years with TxtAI. It works great. Also plenty of good add-ons for it such as sqlite-vec for storing vectors. It can take you pretty far and maybe it's all you need in many circumstances.

koakuma-chan 2 hours ago

Alternative title: SQLite is All You Need

busymom0 3 hours ago

Previously, I had always used Postgres for the database and Rust or NodeJS for my backend. For my new website (https://limereader.com/), I used Swift for the backend, SQLite for the database, and Vapor as the web server in the Swift app, and I'm self-hosting the site on an old Mac mini.

A SQLite-related issue I ran into had to do with accessing the database from multiple threads. The solution was easy to find: for multi-threaded use, SQLite needs to be opened with the `SQLITE_OPEN_FULLMUTEX` flag. Since then, the website has been running seamlessly for about 3 weeks now.

  • wmanley 2 hours ago

    Use a connection per-thread instead. By sharing a connection across threads you’ll be limiting concurrency - and transactions won’t work as you’d expect. SQLite connections are not heavy.

    Also: use WAL mode and enable mmap.

  • maxmcd 3 hours ago

    This will block threads while waiting for other threads to write. That might work great for your threading model, but I usually end up putting the writer in one thread and having the other threads send writes to the writer thread.

    • busymom0 2 hours ago

      I do open 2 connections:

      First one for writing with flags:

          SQLITE_OPEN_CREATE | SQLITE_OPEN_READWRITE | SQLITE_OPEN_FULLMUTEX
      
      Second one for reading with flags:

          SQLITE_OPEN_READONLY | SQLITE_OPEN_FULLMUTEX
      
      As you can note, I have SQLITE_OPEN_FULLMUTEX on both of them. Should I only have it for the writing one?
  • andersmurphy 2 hours ago

    You don't need fullmutex if you manage your connections correctly at the application level, i.e. ensure each connection is only used from a single thread at a time. I also highly recommend having an MPSC queue for your batches/writes and making them go through a single connection so you don't have to deal with SQLITE_BUSY or SQLITE_LOCKED.
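
    A rough Go sketch of that write-queue pattern (illustrative only - the driver, schema, and names are assumptions, not code from the article):

        package main

        import (
            "database/sql"
            "log"

            _ "github.com/mattn/go-sqlite3" // assumed driver
        )

        // write is one unit of work for the single writer goroutine.
        type write struct {
            query string
            args  []any
            done  chan error
        }

        func main() {
            db, err := sql.Open("sqlite3", "app.db")
            if err != nil {
                log.Fatal(err)
            }
            db.SetMaxOpenConns(1) // the single write connection
            db.Exec("PRAGMA journal_mode=WAL")
            db.Exec("PRAGMA synchronous=NORMAL")
            db.Exec("CREATE TABLE IF NOT EXISTS events (msg TEXT)")

            queue := make(chan write, 1024) // MPSC: many producers, one consumer
            go func() {
                for w := range queue {
                    // Only this goroutine touches the write connection, so callers
                    // never have to juggle SQLITE_BUSY / SQLITE_LOCKED themselves.
                    _, err := db.Exec(w.query, w.args...)
                    w.done <- err
                }
            }()

            // Any request handler just enqueues and waits:
            done := make(chan error, 1)
            queue <- write{query: "INSERT INTO events (msg) VALUES (?)", args: []any{"hello"}, done: done}
            if err := <-done; err != nil {
                log.Println("write failed:", err)
            }
        }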

Tractor8626 an hour ago

SQLite is very cool. But what was the point of slowing Postgres down?

> But, wait our transactions are not serialisable, which they need to be if we want consistent transaction processing

You either don't know what serializable does or you're trying to mislead the reader. There is zero reason to use serializable here.

> Let's say you have 5ms latency between your app server and your database.

5ms latency is unrealistic, unless you use WiFi or your database is in another datacenter.

> I'm talking about transactions per second, specifically interactive transactions that are common when building web applications

No they are not common at all. You probably invented them just to make pg look bad.

  • andersmurphy an hour ago

    > You either don't know what serializable does or you're trying to mislead the reader. There is zero reason to use serializable here.

    If you're processing financial transactions you want your isolation level to be serialisable, as the order in which the transactions are processed matters.

    > 5ms latency is unrealistic. Unless you use wifi or you database is in another datacenter.

    Even with 1ms latency, Amdahl's law will still make you cap out at a theoretical 1000 TPS if you have 100% row lock contention.

    > No they are not common at all. You probably invented them just to make pg look bad.

    I'm confused. I invented transactions? Are you saying you don't use transactions with rollback when you use PG?