One thing I don't usually see addressed with the pure-SQL approaches is how to handle dynamic query building. The most common example is large configurable forms that display a data grid. Kysely[1] does a good job of starting from this angle, while still letting you specify the concrete deserialization type, similar to the libraries here.
I'm a big fan of SQL in general (even if the syntax can be verbose, the declarative nature is usually pleasant and satisfying to use), but whenever dynamism creeps in it gets messy: conditional joins/selects/where clauses, etc.
How do folks that go all in on SQL-first approaches handle this? Home-grown dynamic builders are what the various places I've worked implemented in the past, but they're usually not built out as a full API, just kind of cobbled together. Eventually they swap to an ORM to solve the issue.
* [1] https://kysely.dev
> dynamic query building
It's not (really) addressed by sqlx (intentionally), in the same way most ORM features are not addressed.
But to some degree that is what's so nice about sqlx: it mainly(1) provides the basic SQL functionality and then lets you decide what to use on top of it (or whether to use anything on top at all).
If you need more, e.g. the sea-* ecosystem (sea-query, sea-orm) might fulfill your needs.
(1): It can compile-time check "static" queries (i.e. only placeholders), which is a bit more than "basic" functionality, but some projects have 99+% static queries, in which case this feature can move SQLx from "a building block for other SQL libs" to "all you need", keeping the dependency tree thinner.
One approach is to create views for the required data and then just select the columns which are needed. The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.
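For what it's worth, a minimal sketch of that pattern (assuming Postgres; the table, view, and column names are made up), with the caveat that planners typically only remove an unused LEFT JOIN when the join key is provably unique on the joined side:

```
// Hypothetical schema for the view approach; `organizations.id` being the
// primary key is what lets the planner prove an unused LEFT JOIN is a no-op.
const CREATE_VIEW: &str = "
    CREATE VIEW user_grid AS
    SELECT u.id, u.first_name, u.birth_date, o.name AS org_name
    FROM users u
    LEFT JOIN organizations o ON o.id = u.org_id;
";

// A grid query that touches only `users` columns; Postgres can then drop
// the join to `organizations` entirely from the plan.
const GRID_QUERY: &str = "SELECT id, first_name FROM user_grid";
```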
> The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.
I always wondered about this. How reliable is that in your experience? Thank you in advance.
Not Rust, but I've been a pretty big fan of Dapper and Dapper.SqlBuilder in the C# space... I have used them with MS-SQL and PostgreSQL very effectively, even with really complex query construction against input options.
https://github.com/DapperLib/Dapper/blob/main/Dapper.SqlBuil...
I find that interpolating strings works pretty well for this use case (I actually switched TO string interpolation from ORMs at a previous job of mine).
But this is conditional on either your database or your minimal abstraction layer supporting binding an array of data to a single placeholder (which is generally true for Postgres).
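A sketch of what that single-placeholder array bind looks like with sqlx and Postgres (table/column names are assumptions): the SQL text stays static no matter how many values you pass.

```
use sqlx::PgPool;

// Fetch ids for any number of organizations with a single `$1` placeholder;
// Postgres receives the slice as a real array and `= ANY($1)` does the rest.
async fn user_ids(pool: &PgPool, orgs: &[String]) -> Result<Vec<i64>, sqlx::Error> {
    sqlx::query_scalar("SELECT id FROM users WHERE organization = ANY($1)")
        .bind(orgs)
        .fetch_all(pool)
        .await
}
```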
Is something like SeaQuery[0] what you're talking about?
[0] https://github.com/SeaQL/sea-query/
SeaQuery looks like a similar dynamic query builder for Rust to what Kysely is for JS/TS, so yeah, that'd probably solve the dynamic query problem. But I think the parent wasn't so much asking for another library as for patterns.
How do people who choose to use a no-dsl SQL library, like SQLx, handle dynamic queries? Especially with compile-time checking. The readme has this example:
...
WHERE organization = ?
But what if you have multiple possible where-conditions, let's say
"WHERE organization = ?", "WHERE starts_with(first_name, ?)", "WHERE birth_date > ?",
and you need some combination of those (possibly also none of them) based on query parameters to the API. I think that's a pretty common use case.
I agree with you that dynamic query building can be tedious with a pure SQL approach.
The use case you are describing can be solved with something along the lines of:
WHERE organization = $1
AND ($2 IS NULL OR starts_with(first_name, $2))
AND ($3 IS NULL OR birth_date > $3)
With SQLx you would make all those params Options and fill them according to the parameters that were sent to your API. Does that make sense?
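A sketch of that pattern with sqlx against Postgres (assumptions: the `users` table from the thread, and dates passed as text to keep the example dependency-free):

```
use sqlx::PgPool;

async fn search_users(
    pool: &PgPool,
    org: &str,
    name_prefix: Option<&str>,
    born_after: Option<&str>, // e.g. Some("1990-01-01")
) -> Result<Vec<String>, sqlx::Error> {
    // Each optional filter collapses to TRUE when its bind is NULL, so the
    // SQL text and placeholder numbering never change.
    sqlx::query_scalar(
        "SELECT first_name FROM users \
         WHERE organization = $1 \
           AND ($2::text IS NULL OR starts_with(first_name, $2)) \
           AND ($3::date IS NULL OR birth_date > $3::date)",
    )
    .bind(org)
    .bind(name_prefix)
    .bind(born_after)
    .fetch_all(pool)
    .await
}
```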
That's relying a lot on the DB engine, which will struggle as the conditions get more complex. I've had MySQL make stupid query-plan choices for very similar queries; I had to break the OR into UNIONs.
I think the dynamic part is where the clauses themselves are optional. For example, say you have a data table whose rows a user can filter using multiple columns. They can filter by just `first_name`, or by `birth_date`, or by both at the same time using AND / OR, and so on. So you're dynamically adding more or fewer WHERE conditions, and then it gets tricky when you have to include placeholders like `$1`, since you have to keep track of how many parameters your dynamic query actually includes.
I generally avoid DSLs as they don't bring much... except for this exact use case. Dynamic queries are pretty much what a query builder is for: you can avoid a dependency by rolling your own, but it's not trivial, and people out there have built some decent ones.
So, if I have this use case I'd reach for a query builder library. To answer the question of "how to do dynamic queries without a query builder library", I don't think there's any other answer than "make your own query builder".
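For the roll-your-own end of that spectrum, sqlx itself does ship a minimal string-based QueryBuilder that handles the placeholder numbering for you; a sketch (the Filters struct and table/column names are my own invention):

```
use sqlx::{Postgres, QueryBuilder};

struct Filters<'a> {
    organization: Option<&'a str>,
    name_prefix: Option<&'a str>,
}

fn build_query<'a>(f: &Filters<'a>) -> QueryBuilder<'a, Postgres> {
    // Start from a base that is always valid, then append each optional
    // predicate together with its bind; push_bind assigns $1, $2, ... itself.
    let mut qb = QueryBuilder::new("SELECT first_name, birth_date FROM users WHERE 1=1");
    if let Some(org) = f.organization {
        qb.push(" AND organization = ").push_bind(org);
    }
    if let Some(prefix) = f.name_prefix {
        qb.push(" AND first_name LIKE ").push_bind(format!("{prefix}%"));
    }
    qb
}
```

The caller would then run it with something like `build_query(&filters).build_query_as::<Row>().fetch_all(&pool)`. Note that queries built this way don't get the compile-time checking discussed below.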
> Especially with compile-time checking.
No compile-time checking; integration tests.
In general sqlx only provides the most minimal string-based query building, so you can easily run into annoying edge cases you forgot to test; if your project needs a lot of that, libraries like sea-query or sea-orm are the way to go (though it's still viable without them, just a bit annoying).
In general, SQLx "compile time query checking" still needs a concrete query and a running DB to check whether the query is valid. It is not doing a re-implementation of every dialect's syntax, semantics, subtle edge cases, etc. That just isn't practical, as SQL is too inconsistent in its edge cases, its non-standard extensions, and even its theoretically standardized parts (partly because it costs money to read the standard, and its updates are heavily biased toward the MS/Oracle databases).
This means compile-time query checking doesn't scale that well to dynamic queries: you would basically need to build and check every query you might dynamically create (or the subset you want to test), at which point you are in integration-test territory anyway, and you can do it with integration tests just fine (sketch below).
Besides the sqlx-specific stuff, AFAIK some of the "tweaked SQL syntax for better composability" experiments are heading for SQL standardization, which might make this way less of a pain in the long run, but I don't remember the details at all, so, uh, maybe not?
---
EDIT: Yes, there is an sqlx "offline" mode which doesn't need a live DB; it works by basically caching results from the online mode. It is very useful, but still not an independent/standalone query analysis.
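A sketch of that integration-test idea, reusing the hypothetical `Filters`/`build_query` from the earlier sketch (`#[sqlx::test]` provisions a test database pool in recent sqlx versions):

```
#[sqlx::test]
async fn every_filter_combination_builds_valid_sql(pool: sqlx::PgPool) {
    // Enumerate the (small) space of dynamic shapes and run each one, so the
    // database itself validates every query the builder can produce.
    for organization in [None, Some("acme")] {
        for name_prefix in [None, Some("ann")] {
            let filters = Filters { organization, name_prefix };
            build_query(&filters)
                .build()
                .fetch_all(&pool)
                .await
                .expect("dynamically built query should be valid");
        }
    }
}
```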
I've been using sqlx with postgres for several months now on a production server with decent query volume all day long. It has been rock solid.
I find writing sql in rust with sqlx to be far fewer lines of code than the same in Go. This server was ported from Go and the end result was ~40% fewer lines of code, less memory usage and stable cpu/memory usage over time.
Speaking of Go, if you want compile-time type checking like what SQLx offers, the Go ecosystem has an option that is arguably even better at it:
https://sqlc.dev/
It has the advantage that it implements the parsing and type-checking logic in pure Go, allowing it to import your migrations and infer the schema for type checking. With SQLx you need to have your database engine running at compile time, during the proc macro execution, with the schema already available. This makes SQLx kind of a non-starter for me, though I understand why nobody wants to do what sqlc does (it involves a lot of duplication that essentially reimplements database features). (Somewhat ironically, it's less useful for sqlc to do this, since it runs as code generation outside the normal compilation, so even if it did need a live database connection to do the code generation it would be less of an impact... But it's still nice for simplicity.)
Maintainer of sqlc here. Thanks for the kind words! I'm considering switching to the sqlx model of talking to a running database simply because trying to re-implement PostgreSQL internals has been a huge challenge. It works for most queries, but for the long tail of features, it's a losing battle.
Can you tell me why it's a non-starter for you?
It's possible to run sqlx in 'offline' mode that uses your schema to do the checks so you don't need a live database. That's a popular option in CI/CD scenarios.
It's absolutely core to SQLx. I'm surprised to hear that it isn't widely known, based on the parent. The first time I used SQLx must have been 4 or 5 years ago, and they had it back then.
Well, it hurts that it isn't the default. The README still tells you to set the environment variable, it just isn't the "default" way to do things. In my opinion it would be better to entirely remove support for connecting to the database during compilation. Does anyone actually want to use it that way?
Comparing and contrasting: sqlc type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use an actual database connection (presumably this is useful if you're doing something weird that sqlc's internal model doesn't support, but even using PostgreSQL-specific features I hadn't really run into much of this.)
I think that, if a new user is going to encounter an error, it should be that SQLx couldn't talk to the database rather than that a mysterious file doesn't exist. They're going to need to connect to a dev database either way. They can learn about caching the schema information when they come to those later steps like building a CI pipeline. Early in a project, when your queries and schema are unstable, caching isn't going to be very useful anyway, since you'll be invalidating it constantly.
The sqlc authors are to be applauded for making a static analyzer, that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability hit is basically free - don't you want to connect to a dev database locally anyway to run end to end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (eg constraint violations).
1. I can't tell you how unconvinced I am by the error being less confusing. A good error message tells you what's wrong and ideally how to remedy it if possible... and to me there isn't really a practical difference between "set this environment variable" and "run this command". It seems like you basically add one extra step, but you prevent people from choosing a probably suboptimal workflow that they almost certainly don't want to use anyway... Either way, I don't think it's more confusing, and for someone new it's better to have only one way to do something, especially if it's the obviously superior one anyway.
2. Sure, the database will probably be running locally, when you're working on database stuff. However, the trouble here is that while I almost definitely will have a local database running somehow, it is not necessarily going to be accessible from where the compiler would normally run. It might be in a VM or a Docker container where the database port isn't actually directly accessible. Plus, the state of the database schema in that environment is not guaranteed to match the code.
If I'm going to have something pull my database schema to do some code generation I'd greatly prefer it to be set up in such a way that I can easily wrap it so I can hermetically set up a database and run migrations from scratch so it's going to always match the code. It's not obvious what kinds of issues could be caused by a mismatch other than compilation errors, but personally I would prefer if it just wasn't possible.
The error message is a fair point, I do still think that making caching the default is premature.
I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and allows you to connect from the host device, regardless of what libraries you're using. Applying your migrations will vary by what tools you use, but the port forwarding is 2 simple lines. (Note that SQLx has a migration facility, but it's quite bare bones.)
This is not quite the same thing, because it requires `sqlx prepare` to be run first; and that talks to the database to get type information. In SQLC, on the other hand, query parsing and type inference is implemented from first principles, in pure Go.
sqlc's approach has its limitations. Its SQLite query parser is generated from an ANTLR grammar, and I've encountered situations where valid SQLite syntax was rejected by sqlc due to their parser failing.
Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, and no special tooling is necessary to manage migration files. sqlc, on the other hand, hard codes support for specific Go migration tools; each of the supported tools were either too opinionated for my use case or seemed unmaintained. SQLx has built-in tooling for migrations; it requires zero extra dependencies and satisfies my needs. Additionally, inferring types inside the actual database has its benefits: (1) no situations where subsets of valid query syntax are rejected, and (2) the DB may be used for actual schema validation.
For an example of why (2) may be better than sqlc's approach: databases like SQLite sometimes allow NULL primary keys; this gets reflected in SQLx when it validates inferred types against actual database schemas. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, this footgun is documented in the type system whenever it can detect that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).
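For the curious, a sketch of that SQLite footgun (an illustrative demo with made-up names, not specific to either tool): on an ordinary rowid table, a non-INTEGER PRIMARY KEY does not imply NOT NULL.

```
use sqlx::{Connection, SqliteConnection};

async fn null_pk_demo() -> Result<(), sqlx::Error> {
    let mut conn = SqliteConnection::connect("sqlite::memory:").await?;
    sqlx::query("CREATE TABLE t (id TEXT PRIMARY KEY, body TEXT)")
        .execute(&mut conn)
        .await?;
    // SQLite happily accepts a NULL primary key here, which is why tools
    // that trust the live schema surface `id` as nullable.
    sqlx::query("INSERT INTO t (id, body) VALUES (NULL, 'oops')")
        .execute(&mut conn)
        .await?;
    Ok(())
}
```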
I believe sqlc can also connect to the database for type inference now too, fwiw.
Offline query caching is great. The team has made it work fantastically for workspace-oriented monorepos too.
I ran sqlx / mysql on a 6M MAU Actix-Web website with 100kqps at peak, with relatively complex transactions and queries. It was rock solid.
I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).
In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.
I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.
You seem to know your stuff. What's your opinion of diesel?
Implementing the parsing and type checking logic in pure Go is not an unqualified advantage. As you point out, it means that SQLC "...essentially reimplements database features..." and in my experience, it does not reimplement all of them.
> with SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available.
FWIW, the compile-time query checking is entirely optional. If you don't use the query syntax checking then you don't need a live database and you don't need `sqlx prepare`.
I never gelled with how SQLC needs to know about your schema via the schema file. I'm used to flyway, where you can update the schema as long as it's versioned correctly, such that running all the flyway migration sets will produce the same DB schema.
I preferred go-jet, since it introspects the database for its code generation instead.
The way I prefer to use sqlc is in combination with a schema migration framework like goose. It actually is able to read the migration files and infer the schema directly without needing an actual database. This seems to work well in production.
That's how I'm using it as well (though I'm using some simple migration code instead of a framework): https://github.com/bbkane/enventory/tree/master/app/sqliteco...
I've been quite happy with this setup!
I spent 2 weeks trying to build a very basic REST CRUD API with SQLc and it was not better. I had to shift to SQLx because of how unintuitive SQLc was.
We've been running SQLC in production for a while now and I'm curious which part of it you found unintuitive. We run ours as a container service within the development environment that will compile your code from a postgres dump file. We've had no issues with it at all after the initial configuration for SQLC, though the documentation certainly isn't great. Hell, I'm not sure I've ever worked with a better SQL-to-language tool in my decades, so I'm surprised that it isn't working out for you.
That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts who added CRUD functionality by using LLMs to generate SQL. (We do have a lot of safeguards around that, to make it less crazy than it sounds.)
If you want fancy LINQ, GraphQL, OData, or even a lot of REST frameworks, you're not getting any of that with SQLC, but that's typically not what you'd want from a Go backend in my experience. Might as well build it with C# or Java then.
It's quite simple really. I want to write a query and have a concrete object as its return type. The framework that gets me there in the fewest steps is going to be more intuitive.
Let's compare:
SQLC
- configuration file (yaml/json)
- schema files
- query files
- understand the meta language in query file comments to generate code you want
SQLx
- env: DATABASE_URL
Now, does that mean that SQLx is the best possible database framework? No, it does not. But because I didn't spend my time doing things that weren't related to the exact queries I had to write, I got more work done.
I want to appreciate the hard work the SQLx devs have put in to push the bar for a decent SQL developer experience. People give them a really hard time for certain design decisions, pending features, and bugs. I've seen multiple comments calling its compile-time query validation "gimmicky" and that's not nice at all. You can go to any other language and you won't find another framework that is as easy to get started with.
> SQLC - configuration file (yaml/json) - schema files - query files - understand the meta language in query file comments to generate code you want
I would recommend using pg_dump for your schema file, which means it won't be tied to SQLC as such. This way it will be easier for you to maintain your DB; we use Goose, for example. In our setup, part of the pipeline is that you write your Goose migration, then an automated process updates the DB running in your local dev DB container, does a pg_dump from it, and then our dev-container instance of SQLC compiles your schema for you.
The configuration file is centralized as well, so you don't have to worry about it.
I agree with you on the SQLC meta language on queries; I appreciate that it's there, but we tend to avoid using it. I personally still consider the meta language a better way of doing things than in-code SQL queries. This is a philosophical sort of thing of course, and I respect that not everyone agrees with me on this. It's hard for me to comment on SQLx, however, as I haven't really used it.
What I like about SQLC is that it can be completely de-coupled from your Go code.
Maybe I'm drinking the sqlc Kool aid, but because I'm already using migration files, setting up the config to point to them and a folder of SQL queries was pretty painless.
And of course now that I have it, the incremental cost of adding a new query is really low as well
That's all understandable. But like I said, I spent 2 weeks working with SQLc, and when I compared it to just writing the query in my code, the developer experience was miles apart.
You could compare it to people writing CSS, JavaScript, and markup in separate files vs. having just one file in React/Svelte etc., which gives the user the option to combine everything into one.
There may be a lot of drawbacks to the latter approach, but it makes everything a hell of a lot easier for people to just get started building.
Interesting - I've had the opposite experience. I usually prefer rust for personal projects, but when I recently tried to use SQLx with sqlite, lots of very basic patterns presented problems, and I wished I had sqlc back.
I love its compile-time query validation.
This is why I like using NodeJS or Python with SQL: it's very simple to have it not care about the return types. SQL is already statically typed per se; I don't need to re-assert everything. Achieving the same kind of automation in Go etc. requires parsing the schema at compile time like what you described, which is complicated.
imo sqlc from Go is superior to sqlx from Rust. The other thing is that sqlx is somehow slow; in some tests I ran, pgx (Go) was faster than sqlx.
sqlx pulls in `syn`. Syn is really slow to compile.
How is it more LoC in Go, just because of the "if err" stuff?
Go's verbose error handling certainly impacted the vertical height of files (lots of early returns), but wasn't a big contributor to overall LoC.
The more serious LoC offenders in Go were:
1. Marshalling/Unmarshalling code (for API responses, to/from external services, etc). In general, working with JSON in Go was painful and error prone. Rust's serde made this a complete non-issue.
2. Repetitive sql query code (query, scan for results, custom driver code for jsonb column marshalling/unmarshalling). Rust's sqlx made this a non-issue.
3. Go's use of context to share data through handlers was a real pain and error-prone (type casting, nil checks, etc.). Rust's actix-web made this a real beautiful thing to work with. Need a "User" in your handler? Just put it as an argument to the handler, and the handler is only called if it's available. Need a DB connection? Just put it as an argument to the handler. (See the sketch after this list.)
4. Go's HTML/Text templates required more data to be passed in and also required more safety checks. Rust's askama was overall more pleasant to use and provided more confidence when changing templates. In Rust, I'd catch errors at compile time. In Go, I'd catch them at runtime (or, a user would).
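To illustrate point 3, a minimal actix-web handler (the route and state setup are assumptions): each argument is an extractor, and the handler body only runs if all of them succeed.

```
use actix_web::{get, web, HttpResponse, Responder};
use sqlx::PgPool;

#[get("/users/{id}")]
async fn get_user(path: web::Path<i64>, db: web::Data<PgPool>) -> impl Responder {
    // `path` was parsed from the URL and `db` pulled from app state
    // (registered via App::app_data) before this body ever ran.
    let id = path.into_inner();
    let _pool: &PgPool = db.get_ref(); // would be used for the actual query
    HttpResponse::Ok().body(format!("user {id}"))
}
```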
I must admit I was surprised. I thought Rust would have been more lines of code because it's a lower level language, but it ended up being ~40% less code. My general sentiment around working with the code is very different as well.
In the Rust codebase I have no hesitation to change things. I am confident the compiler will tell me when I'm breaking something. I never had that confidence in Go.
Hm. I've used Rust a lot more than Go, so this is secondhand to me. I know that generics are iffy and nullness is annoying. If you're paying for static types in Go and still not getting the guarantees, that really bites.
I have used this as well as many of the other lower-level db drivers (which don't check your SQL at compile time) and I can say I much prefer the latter.
My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend, I expect those issues are fixed now but for some simple apps it's nice to be able to start with SQLite and then switch out with postgres.
Then I wanted to dockerize an SQLx app at one point and it all became a hassle, as you need postgres running at compile time, and trying to integrate that with docker compose was a real chore.
Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.
SQLx is a nice idea but too cumbersome in my experience.
[1]: https://docs.rs/sqlite/latest/sqlite/ [2]: https://docs.rs/postgres/latest/postgres/
I have no experience with abstracting away the backend, but dockerizing is actually pretty easy now - there's an offline mode[1] where you can have sqlx generate some files which let it work when there's no DB running.
[1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...
It's definitely not perfect, but I think both of those issues are better now, if not fully solved.
For needing a DB at compile time, there's an option to have it produce artefacts on demand that replace the DB, although you'll need to connect to a DB again each time your queries change. Even that is all optional though, if you want it to compile time check your queries.
I know it's annoying (and apparently there is a solution for generating the required files before the build), but in these kinds of situations Go and Rust are great for doing a static build on the system and then copying into a scratch image.
Versus Python and Node often needing to properly link with the system they'll actually be running in.
Why would you want to abstract away the underlying database?
Wouldn't it be better to already use the target DB, to catch potential issues earlier? Also to avoid creating another layer of indirection, potentially complecting the codebase and reducing performance?
> Wouldn't it be better to already use the target DB, to catch potential issues earlier?
The target DB can change as a project goes from something mildly fun to tinker with to something you think might actually be useful.
Also, I personally find that SQLite is just nice to work with. No containers or extra programs; it just does what you ask it to, when you ask it to.
Primarily for libraries and deployment environments that aren't fully in your control, which is still pretty common once you get to B2B interactions; SaaS is not something you can easily sell into certain environments. Depending on the assurance you need, you might even need to mock out the database entirely to test that certain classes of database errors are recoverable or fail in a consistent state.
Even in SaaS systems, once you get large enough with a large enough test suite you'll be wanting to tier those tests starting with a lowest common denominator (sqlite) that doesn't incur network latency before getting into the serious integration tests.
Thanks, interesting experience - so much depends on getting developer ergonomics right. There is something to be said for checking the SQL at compile time, though - especially if you're trying to ORM into a typesafe language.
How long ago did you try SQLx? Not necessarily promoting SQLx, but `query_as`, which lets one make queries without the live-database macro, has been around for 5 years [1].
For lower-level libraries there is also the more-downloaded SQLite library, rusqlite [2], whose author also maintains libsqlite3-sys, which is what the sqlite library wraps.
The most pleasant ORM experience, when you want one, IMO is the SeaQl ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM so I can support database-specific enhancements.
The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code that uses macros to define the schema types, which are then used for type/member checking of the row structs. If the auto-detection does not work, you have to use its patch_file system, which can't be maintained automatically just through Cargo [4] (I wrote a Makefile scheme for myself). You will most likely have to use the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g. Timestamp -> Timestamptz for postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. And it may not be the best library for you if you want large denormalized tables [6], because of compile times, and because a database that is not normalized [7] is considered an anti-pattern by the project.
If you are just starting out with Rust, I'd recommend checking out SeaQl. And then if you can benchmark that you need faster performance, swap out for one of the lower level libraries for the affected methods/services.
Quite similar to manifold-sql[1], which is arguably better integrated into Java than SQLx is into Rust. Inline native SQL in Java is *inherently type-safe*, with no mapping -- query types, query results, and query parameters are all projected as types at compile time.
```
int year = 2019;
...
for (Film film : "[.sql/] select * from film where release_year > :rel_year".fetch(year)) {
    out.println(film.title);
}
```
sqlx is my favorite way of working with databases in Rust hands down.
I've tried alternatives like Diesel and sea-orm. To be honest, I feel like full-blown ORMs really aren't a very good experience in Rust. They work great for dynamic languages in a lot of cases, but trying to tie a DB schema into Rust's type system often creates a ton of issues once you try to do anything more than a basic query.
It's got a nice little migration system too with sqlx-cli which is solid.
SQLx is great, but I really wish they had a non-async interface. I had to switch a project from sqlx to rusqlite seemingly just due to the overhead of the async machinery. Saw a 20x latency reduction that I narrowed down to "probably async" (sort of hard to tell, I find it very difficult to do perf analysis of async code).
I try to avoid discussing async so as to not come off as a frothing-at-the-mouth, chest-thumping luddite, but honestly, if sqlx had a non-async interface I'd be very happy to accept the "you don't need to use it" argument. It's the only place where I don't feel like I really have a choice.
Async does not incur 20x slowdowns when you're I/O bound. It would be ridiculous for copying a few bytes to be slower than a syscall. This sounds like mutex issues, or WAL config, or something like that.
SQLx is great, but I had a long laundry list of issues with its SQLite support so I forked it into a focused SQLite-specific library. It has now diverged very far from SQLx, and the number of small inaccuracies and issues we fixed in the low-level SQLite bindings is well into the dozens. The library is unannounced, but is already being used in some high-throughput scenarios.
As a total outsider to sqlx, those issues don’t surprise me: any application on any platform that uses a SQLite in-memory DB concurrently is likely to violate many assumptions made by client-side connection pooling tools. In-memory SQLite is a great tool, but using it indirectly behind a connection pooler that assumes the database is external to the current process is bound to cause problems.
SQLx and F# type-providers are probably the best developer experience for writing database access code. I wish more languages had something equivalent.
I think this sort of stuff only comes after a LOT of experience with building SQL db backed systems - it resonated with me immediately. (I'm the OP but not affiliated with this Rust project at all).
I first went to sqlx thinking it would be like JOOQ for Rust, but that wasn't the case. It's a pretty low-level library and didn't really abstract away the underlying DBs much, not to mention issues with type conversions. We've since just used rust-postgres.
To expand, SQLx isn't an ORM or query builder, what it does is allow you to write raw SQL with compile-time guarantees of type safety. It does this by connecting to a dev database at compile time & uses SQL's introspection features (specifically, by preparing a statement[1]) to analyze your queries. (It can also cache this information to check without a database available, and has a basic migration facility.)
I've used SQLx for a couple of projects (MariaDB and SQLite); it's good, it does the thing, though it takes a little getting used to. The fact that it can check queries at compile time is its biggest strength.
I use a fork of sqlx in SQLPage [1]. I think my main complaint about it is runtime errors (or worse, values decoded as garbage) when decoding SQL values to the wrong rust type.
Love SQLx for my Rust projects. I would like to figure out a great way to use the compile time checks in python or js projects, but haven't explored it yet.
I find it kind of baffling that this toolkit is so popular when it makes handling database joins so difficult. After bashing my head against it for a while, I moved to Diesel, and while that has its own set of problems, I am generally able to get through them without resorting to horrible hacks or losing compile time checks.
What problems have you had with joins?
I have this comment in one of my projects:
```
It is required to mark left-joined columns in the query as nullable, otherwise SQLx expects them to not be null even though it is a left join. For more information, see the link below:
https://github.com/launchbadge/sqlx/issues/367#issuecomment-...
```
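For reference, the usual workaround is the type-override syntax in SQLx's query macros: an `AS "name?"` alias forces the column to be treated as nullable. A sketch with made-up table names:

```
async fn nicknames(pool: &sqlx::PgPool) -> Result<(), sqlx::Error> {
    let rows = sqlx::query!(
        r#"
        SELECT u.id, p.nickname AS "nickname?"
        FROM users u
        LEFT JOIN profiles p ON p.user_id = u.id
        "#
    )
    .fetch_all(pool)
    .await?;
    // Each row's `nickname` is an Option<String> thanks to the "?" override.
    println!("{} rows", rows.len());
    Ok(())
}
```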
Did you have other problems beyond this, or are you referring to something different?
The issue above is a bit annoying but not enough that I'd switch to an ORM over it. I think SQLx overall is great.
It's a Rust library that you can use to run SQL queries against a database. It also inspects the database at compile time to figure out the type of each column in your query, so that your code is type-safe.
Just coming here to make a prediction: using raw SQL is not great for anything but very simple cases. You can make it type-safe, but that becomes tricky once things become dynamic.
But the real problem is ergonomics. The better solution in almost any language is to leverage the syntax of your language to allow for as much (non-macro) type safety and auto-completion as possible.
For example instead of:
SELECT country, COUNT(*) as count
FROM users
WHERE organization = ?
GROUP BY country
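(The comment's builder example appears to have been lost; here is a sketch of the style it describes, using SeaQuery, mentioned elsewhere in the thread, with its derive feature. The enum and binding are assumptions.)

```
use sea_query::{Alias, Expr, Func, Iden, PostgresQueryBuilder, Query};

#[derive(Iden)]
enum Users {
    Table,
    Country,
    Organization,
}

fn main() {
    let organization = "acme";
    let (sql, values) = Query::select()
        .column(Users::Country)
        .expr_as(Func::count(Expr::col(Users::Country)), Alias::new("count"))
        .from(Users::Table)
        .and_where(Expr::col(Users::Organization).eq(organization))
        .group_by_col(Users::Country)
        .build(PostgresQueryBuilder);
    println!("{sql}");
    let _ = values; // the binds travel alongside the generated SQL
}
```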
[note that it matches SQL, not the names of the language's collection functions.]
As you see, that's also nice because now you can actually use variables easily - and even use pure Rust to decide on things dynamically.
You can further increase typesafety if you want by doing things like `.from(table("users"))` and running extra checks on that table() part, similar to what the lib probably does. Also, sometimes you might have to make a compromise on your syntax and things like `"organization".=(yourVariable)` might have to be slightly rewritten.
Still, I think that people will rather end up with a library like I described, unless the SQL is very basic/static.
No, my experience is the inverse. The type of library you describe is nice for basic queries, but once you start needing CTEs, subqueries, postgres json queries, etc., it just becomes easier to manage it all in SQL directly.
I'm using JOOQ. Not saying that JOOQ is the greatest library, but all of what you just mentioned works in there without problem. Including CTEs and json stuff.
With a library such as SQLx, you can never really factor anything out. Or at least it's very hard, and you lose the actual type safety. I've been there and done that with doobie [https://typelevel.org/doobie/], which is basically the same thing.
I've been using SQL from backend languages for years, and I totally disagree. Using raw SQL is the only way for me. It's much easier to develop, tune, and debug in a SQL IDE. There's no need to translate back and forth between SQL and the language-specific DSL. With SQLx, I get all the type safety I really need.
Dynamically constructing queries is awkward, but most of mine have very limited dynamic variation.
I get where you are coming from. But it's very easy to generate the SQL from the code. Then I take that, tune it in my SQL IDE against the DB and then adjust the code. Since the code is basically a 1:1 mapping to SQL (just with slightly different syntax) there isn't really a problem with that.
Once you have any kind of dynamic stuff (like a dynamic filter) you don't have any 100% pure SQL anymore anyways. If you don't have that, okay, this lib will be more convenient.
One thing I don't usually see addressed with the pure-sql approaches is how to handle dynamic query building. The most common example being large configurable forms that display a data grid. Kysely[1] does a good job of starting from this angle, but allowing something like specifying the concrete deserialization type similar to the libraries here.
I'm a big fan of sql in general (even if the syntax can be verbose, the declarative nature is usually pleasant and satisfying to use), but whenever dynamic nature creeps in it gets messy. Conditional joins/selects/where clauses, etc
How do folks that go all in on sql-first approaches handle this? Home-grown dynamic builders is what I've seen various places I've work implement in the past, but it's usually not built out as a full API and kind of just cobbled together. Eventually they just swap to an ORM to solve the issue.
* [1] https://kysely.dev
> dynamic query building
it's not (really) addressed by sqlx (intentionally), in the same way most ORM features are not addressed
but to some degree this is what is so nice about sqlx it mainly(1) provides the basic SQL functionality and then let you decide what to use on top of it (or if to use anything on top).
If you need more e.g. the sea-* ecosystem (sea-query, sea-orm) might fulfill you needs.
(1): It can compile time check "static" queries (i.e. only placeholders) which is a bit more then "basic" features, but some projects have to 99+% only static queries in which case this feature can move SQLx from "a building block for other sql libs" to "all you need" to keep dependencies thinner.
One approach is to create views for the required data and then just select the columns which are needed. The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.
> The joins will be pruned by the query planner if they are not needed, so there is no need for conditional joins.
I always wondered about this. How reliable is that in your experience? Thank you in advance.
Not rust, but I've been a pretty big fan of Dapper and Dapper.SqlBuilder in the C# space... have used it with MS-SQL and PostgreSQL very effectively, even with really complex query construction against input options.
https://github.com/DapperLib/Dapper/blob/main/Dapper.SqlBuil...
I find that interpolating strings works pretty well for this use case (which actually switchd TO string interpolation from ORMs at a previous job of mine).
But this is conditional on either your database or your minimal abstraction layer having support for bindings arrays of data with a single placeholder (which is generally true for Postgres).
Is something like SeaQuery[0] what you're talking about?
[0] https://github.com/SeaQL/sea-query/
SeaQuery looks like a similar dynamic query builder for Rust as Kysely is for JS/TS, so yeah, that'd probably solve the dynamic query problem. But I think parent wasn't so much asking for another library but for patterns.
How do people who choose to use a no-dsl SQL library, like SQLx, handle dynamic queries? Especially with compile-time checking. The readme has this example:
But what if you have multiple possible where-conditions, let's say "WHERE organization = ?", "WHERE starts_with(first_name, ?)", "WHERE birth_date > ?", and you need to some combination of those (possibly also none of those) based on query parameters to the API. I think that's a pretty common use case.I agree with you that dynamic query building can be tedious with a pure SQL approach. The use case you are describing can be solved with something alone the lines of:
With SQLx you would have all the params to be Options and fill them according the parameters that were sent to your API.Does that make sense?
That's relying a lot on the DB engine, which will struggle as the condition gets more complex. I've had MySQL make stupid choices of query plans for very similar queries, I had to break the OR into UNIONs
I think the dynamic part is where the clauses themselves are optional. For example, say you have a data table that a user can filter rows using multiple columns. They can filter by just `first_name` or by `birth_date` or both at the same time using AND / OR, and so on. So you’re dynamically needing to add more or less “WHERE” clauses and then it gets tricky when you have to include placeholders like `$1` since you have to keep track of how many parameters your dynamic query is actually including.
I generally avoid DSLs as they don't bring much... except for this exact use-case. Dynamic queries is pretty much what a query builder is for: you can avoid a dependency by rolling your own, but well it's not trivial and people out there have built some decent ones.
So, if I have this use-case I'd reach for a query builder library. To answer the question of "how to do dynamic queries without a query builder library", I don't think there's any other answer than "make your own query builder"
> Especially with compile-time checking.
no compile time checking and integration tests
in general sqlx only provides the most minimal string based query building so you can easily run into annoying edge cases you forgot to test, so if your project needs that, libraries like sea-query or sea-orm are the way to go (through it's still viable, without just a bit annoying).
in general SQLx "compile time query checking" still needs a concrete query and a running db to check if the query is valid. It is not doing a rem-implementation of every dialects syntax, semantics and subtle edge cases etc. that just isn't practical as sql is too inconsistent in the edge cases, non standard extensions and even the theoretical standardized parts due to it costing money to read the standard and its updates being highly biased for MS/Oracle databases).
This means compile time query checking doesn't scale that well to dynamic queries, you basically would need to build and check every query you might dynamically create (or the subset you want to test) at which point you are in integration test territory (and you can do it with integration tests just fine).
besides the sqlx specific stuff AFIK some of the "tweaked sql syntax for better composeability" experiments are heading for SQL standardization which might make this way less of a pain in the long run but I don't remember the details at all, so uh, maybe not???
---
EDIT: Yes there is an sqlx "offline" mode which doesn't need a live db, it works by basically caching results from the online mode. It is very useful, but still no "independent/standalone" query analysis.
[dead]
I've been using sqlx with postgres for several months now on a production server with decent query volume all day long. It has been rock solid.
I find writing sql in rust with sqlx to be far fewer lines of code than the same in Go. This server was ported from Go and the end result was ~40% fewer lines of code, less memory usage and stable cpu/memory usage over time.
Speaking of Go, if you want compile-time type checking like what SQLx offers, the Go ecosystem has an option that is arguably even better at it:
https://sqlc.dev/
It has the advantage that it implements the parsing and type checking logic in pure Go, allowing it to import your migrations and infer the schema for type checking. With SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available. This makes SQLx kind of a non-starter for me, though I understand why nobody wants to do what sqlc does (it involves a lot of duplication that essentially reimplements database features.) (Somewhat ironically it's less useful for sqlc to do this since it runs as code generation outside the normal compilation and thus even if it did need a live database connection to do the code generation it would be less of an impact... But it's still nice for simplicity.)
Maintainer of sqlc here. Thanks for the kind words! I'm considering switching to the sqlx model of talking to a running database simply because trying to re-implement PostgreSQL internals has been a huge challenge. It works for most queries, but for the long tail of features, it's a losing battle.
Can you tell me why it's a non-starter for you?
It's possible to run sqlx in 'offline' mode that uses your schema to do the checks so you don't need a live database. That's a popular option in CI/CD scenarios.
It's absolutely core to SQLx. I'm surprised to hear that that isn't widely known based on the parent. The first time I used SQLx has to be 4 or 5 years ago and they had it back then.
Well, it hurts that it isn't the default. The README still tells you to set the environment variable, it just isn't the "default" way to do things. In my opinion it would be better to entirely remove support for connecting to the database during compilation. Does anyone actually want to use it that way?
Comparing and contrasting, sqlc type checking happens via code generation, basically the only option in Go since there's nothing remotely like proc macros. Even with code generation, sqlc doesn't default to requiring an actual running instance of the database, though you can use an actual database connection (presumably this is useful if you're doing something weird that sqlc's internal model doesn't support, but even using PostgreSQL-specific features I hadn't really ran into much of this.)
I think that, if a new user is going to encounter an error, it should be that SQLx couldn't talk to the database rather than that a mysterious file doesn't exist. They're going to need to connect to a dev database either way. They can learn about caching the schema information when they come to those later steps like building a CI pipeline. Early in a project, when your queries and schema are unstable, caching isn't going to be very useful anyway, since you'll be invalidating it constantly.
The sqlc authors are to be applauded for making a static analyzer, that is no small feat. But if you can get away with offloading SQL semantics to the same SQL implementation you plan to use, I think that's a steal. The usability hit is basically free - don't you want to connect to a dev database locally anyway to run end to end tests? It's great to eliminate type errors, but unless I'm missing something, neither SQLx nor sqlc will protect you from value errors (eg constraint violations).
1. I can't tell you how unconvinced I am with the error being less confusing. A good error message tells you what's wrong and ideally what to do to remedy it if possible... and to me there isn't really a practical difference between "set this environment variable" and "run this command". It seems like you basically add one extra step, but you prevent people from choosing a probably suboptimal workflow that they almost certainly don't want to use anyways... Either way, I don't think it's more confusing, and for someone new it's better to only have one way to do something, especially if it's the obviously superior thing anyways.
2. Sure, the database will probably be running locally, when you're working on database stuff. However, the trouble here is that while I almost definitely will have a local database running somehow, it is not necessarily going to be accessible from where the compiler would normally run. It might be in a VM or a Docker container where the database port isn't actually directly accessible. Plus, the state of the database schema in that environment is not guaranteed to match the code.
If I'm going to have something pull my database schema to do some code generation I'd greatly prefer it to be set up in such a way that I can easily wrap it so I can hermetically set up a database and run migrations from scratch so it's going to always match the code. It's not obvious what kinds of issues could be caused by a mismatch other than compilation errors, but personally I would prefer if it just wasn't possible.
The error message is a fair point, I do still think that making caching the default is premature.
I would definitely recommend writing a Compose file that applies your migrations to a fresh RDBMS and allows you to connect from the host device, regardless of what libraries you're using. Applying your migrations will vary by what tools you use, but the port forwarding is 2 simple lines. (Note that SQLx has a migration facility, but it's quite bare bones.)
This is not quite the same thing, because it requires `sqlx prepare` to be run first; and that talks to the database to get type information. In SQLC, on the other hand, query parsing and type inference is implemented from first principles, in pure Go.
sqlc's approach has its limitations. Its SQLite query parser is generated from an ANTLR grammar, and I've encountered situations where valid SQLite syntax was rejected by sqlc due to their parser failing.
Type inference was okay, since SQLite barely has any types. The bigger issue I had was dealing with migration files. The nice part about SQLx is that `cargo sqlx database setup` will run all necessary migrations, and no special tooling is necessary to manage migration files. sqlc, on the other hand, hard codes support for specific Go migration tools; each of the supported tools were either too opinionated for my use case or seemed unmaintained. SQLx has built-in tooling for migrations; it requires zero extra dependencies and satisfies my needs. Additionally, inferring types inside the actual database has its benefits: (1) no situations where subsets of valid query syntax are rejected, and (2) the DB may be used for actual schema validation.
For an example of why (2) may be better than sqlc's approach: databases like SQLite sometimes allow NULL primary keys; this gets reflected in SQLx when it validates inferred types against actual database schemas. When I last used sqlc, this potential footgun was never represented in the generated types. In SQLx, this footgun is documented in the type system whenever it can detect that SQLite allows silly things (like NULL primary keys when the PK satisfies certain conditions).
I believe sqlc can also connect to the database for type inference now too, fwiw.
Offline query caching is great. The team has made it work fantastically for workspace oriented monorepos too.
I ran sqlx / mysql on a 6M MAU Actix-Web website with 100kqps at peak with relatively complex transactions and queries. It was rock solid.
I'm currently using sqlx on the backend and on the desktop (Tauri with sqlite).
In my humble opinion, sqlx is the best, safest, most performant, and most Rustful way of writing SQL. The ORMs just aren't quite there.
I wish other Rust client libraries were as nice as sqlx. I consider sqlx to be one of Rust's essential crates.
You seem to know your stuff. What's your opinion of diesel?
Implementing the parsing and type checking logic in pure Go is not an unqualified advantage. As you point out, it means that SQLC "...essentially reimplements database features..." and in my experience, it does not reimplement all of them.
> with SQLx you need to have your database engine running at compile time during the proc macro execution with the schema already available.
FWIW, the compile-time query checking is entirely optional. If you don't use the query syntax checking then you don't need live database and you don't need `sqlx prepare`.
I never gelled with how SQLC needs to know about your schema via the schema file. I'm used to flyway where you can update the schema as long as it's versioned correctly such that running all the sets of flyways will produce the same db schema.
I referred go-jet since it introspects the database for it's code generation instead.
The way I prefer to use sqlc is in combination with a schema migration framework like goose. It actually is able to read the migration files and infer the schema directly without needing an actual database. This seems to work well in production.
That's how I'm using it as well (though I'm using some simple migration code instead of a framework): https://github.com/bbkane/enventory/tree/master/app/sqliteco...
I've been quite happy with this setup!
I spent 2 weeks trying to build a very basic rest crud API with SQLc and it was not better. I had to shift to SQLx because of how unintuitive SQLc was.
We've been running SQLC in production for a while now and I'm curious which part of it you found unintuitive? We run ours as a container service within the development environment that will compile your code from a postgres dump file. We've had no issues with it at all after the initial configuration guidelines for SQLC, though the documentation certainly isn't exactly great. Hell, I'm not sure I've ever worked with a better SQL to language tool in my decades so I'm surprised that it isn't working out for you.
That being said, as I understand it, SQLx does something very different. If you want dynamic queries, you'll basically have to build that module yourself. The power of SQLC is that anyone who can write SQL can work on the CRUD part of your Go backend, even if they don't know Go. Hell, we've even had some success with business domain experts who added CRUD functionality by using LLM's to generate SQL. (We do have a lot of safeguards around that, to make it less crazy than it sounds).
If you want fancy Linq, grapQL, Odata or even a lot of REST frameworks, you're not getting any of that with SQLC though, but that's typically not what you'd want from a Go backend in my experience. Might as well build it with C# or Java then.
It's quite simple really. I want to write a query and have a concrete object as it's return type. The framework that gets me there in the least amount of steps is going to be more intuitive.
Let's compare: SQLC - configuration file (yaml/json) - schema files - query files - understand the meta language in query file comments to generate code you want
SQLx - env: DATABASE_URL
Now does that mean that SQLx is the best possible database framework. No, it does not. Because I didn't spend my time doing things that weren't related to the exact queries I had to write I got more work done.
I want to appreciate the hard work the SQLx Devs have put in to push the bar for a decent SQL developer experience. People give them a really hard time for certain design decisions, pending features and bugs. I've seen multiple comments calling it's compile time query validation "gimmicky" and that's not nice at all. You can go to any other language and you won't find another framework that is as easy to get started with.
> SQLC - configuration file (yaml/json) - schema files - query files - understand the meta language in query file comments to generate code you want
I would recommend using pg_dump for your schema file which means it'll not be related to SQLC as such. This way it will be easier for you to maintain your DB, we use Goose as an example. In our setup part of the pipeline is that you write your Goose migration, and then there is an automated process which will update the DB running in your local dev DB container, do a pg_dump from that and then our dev container instance of SQLC will compile your schema for you.
The configuration file is centralized as well, so you don't have to worry about it.
I agree with you on the SQLC meta language on queries, I appreciate that it's there but we tend to avoid using it. I personally still consider the meta language a beter way of doing things than in-code SQL queries. This is a philosophical sort of thing of course, and I respect that not everyone agres with me on this. It's hard for me to comment on SQLx, however, as I haven't really used it.
What I like about SQLC is that it can be completely de-coupled from your Go code.
Maybe I'm drinking the sqlc Kool aid, but because I'm already using migration files, setting up the config to point to them and a folder of SQL queries was pretty painless.
And of course now that I have it, the incremental cost of adding a new query is really low as well
That's all understandable. But like I said I did spend 2 weeks working with SQLc, however when I compared it to just writing the query in my code, the developer experience was miles apart.
You could compare it to people writing CSS, JavaScript and Markup in separate files Vs having just one file in React/Svelte etc. which gives the user the option to combine everything into one.
There maybe a lot of drawbacks from the latter approach but it's makes everything a hell easier for people to just get started building.
Interesting - I've had the opposite experience. I usually prefer rust for personal projects, but when I recently tried to use SQLx with sqlite, lots of very basic patterns presented problems, and I wished I had sqlc back.
I love it's compile time query validation.
This is why I like using NodeJS or Python with SQL, it's very simple to have it not care about the return types. SQL is already statically typed per se, I don't need to re-assert everything. Achieving the same kind of automation in Go etc requires parsing the schema at compile-time like what you described, which is complicated.
imo sqlc from Go is supperior to sqlx from Rust. The other thing is that sqlx is somehow slow, when I did some test, pgx ( Go ) was faster than sqlx.
sqlx pulls in `syn`. Syn is really slow to compile.
How is it more LoC in Go, just cause of the "if err" stuff?
Go's verbose error handling certainly impacted the vertical height of files (lots of early returns), but wasn't a big contributor to overall LoC.
The more serious LoC offenders in Go were:
1. Marshalling/Unmarshalling code (for API responses, to/from external services, etc). In general, working with JSON in Go was painful and error prone. Rust's serde made this a complete non-issue.
2. Repetitive sql query code (query, scan for results, custom driver code for jsonb column marshalling/unmarshalling). Rust's sqlx made this a non-issue.
3. Go's use of context to share data through handlers was a real pain and error prone (type casting, nil checks, etc). Rust's actix-web made this a real beautiful thing to work with. Need a "User" in your handler? Just put it as an argument to the handler and it's only called if it's available. Need a db connection? Just put it as an argument to the handler.
4. Go's HTML/Text templates required more data to be passed in and also required more safety checks. Rust's askama was overall more pleasant to use and provided more confidence when changing templates. In Rust, I'd catch errors at compile time. In Go, I'd catch them at runtime (or, a user would).
I must admit I was surprised. I thought Rust would have been more lines of code because it's a lower level language, but it ended up being ~40% less code. My general sentiment around working with the code is very different as well.
In the Rust codebase I have no hesitation to change things. I am confident the compiler will tell me when I'm breaking something. I never had that confidence in Go.
Hm. I've used Rust a lot more than Go, so this is secondhand to me. I know that generics are iffy and nullness is annoying. If you're paying for static types in Go and still not getting the guarantees, that really bites.
I have used this as well as many of the other lower-level db drivers (which don't check your SQL at compile time) and I can say I much prefer the latter.
My issues with SQLx when I first tried it were that it was really awkward (nigh impossible) to abstract away the underlying DB backend. I expect those issues are fixed now, but for some simple apps it's nice to be able to start with SQLite and then swap in Postgres.
Then I wanted to dockerize an SQLx app at one point, and it all became a hassle: you need Postgres running at compile time, and trying to integrate that with docker compose was a real chore.
Now I don't use SQLx at all. I recommend other libraries like sqlite[1] or postgres[2] instead.
SQLx is a nice idea but too cumbersome in my experience.
[1]: https://docs.rs/sqlite/latest/sqlite/ [2]: https://docs.rs/postgres/latest/postgres/
I have no experience with abstracting away the backend, but Dockerizing is actually pretty easy now: there's an offline mode[1] where you can have sqlx generate files that let it work when there's no DB running.
[1]: https://docs.rs/sqlx/latest/sqlx/macro.query.html#offline-mo...
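The workflow is roughly this (a sketch; the exact metadata layout, sqlx-data.json vs. a .sqlx/ directory, depends on the sqlx-cli version):
```
# With sqlx-cli installed and a dev database reachable via DATABASE_URL:
cargo sqlx prepare          # caches query metadata for offline builds
# Commit the metadata, then the Docker build needs no database at all:
SQLX_OFFLINE=true cargo build --release
```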
It's definitely not perfect, but I think both of those issues are better now, if not fully solved.
For needing a DB at compile time, there's an option to have it produce artefacts on demand that replace the DB, although you'll need to connect to a DB again each time your queries change. Even that only matters if you want it to compile-time check your queries; the checking itself is optional.
I know it's annoying (and apparently there is a solution for generating the required files before the build), but in these kinds of situations Go and Rust are great for doing a static build on the system and then copying into a scratch image.
Versus Python and Node often needing to properly link with the system they'll actually be running in.
Why would you want to abstract away the underlying database? Wouldn't it be better to use the target DB from the start to catch potential issues earlier? It would also avoid creating another layer of indirection, which can complicate the codebase and reduce performance.
> Wouldn't it be better to use the target DB from the start to catch potential issues earlier?
The target DB can change as a project goes from something mildly fun to tinker with to something you think might actually be useful.
Also I personally find that SQLite is just nice to work with. No containers or extra programs, it just does what you ask it to, when you ask it to
Primarily for libraries, and for deployment environments that aren't fully in your control, which is still pretty common once you get into B2B; SaaS is not something you can easily sell into certain environments. Depending on the assurance you need, you might even need to mock out the database entirely to test that certain classes of database errors are recoverable or fail in a consistent state.
Even in SaaS systems, once you get large enough, with a large enough test suite, you'll want to tier those tests: start with a lowest-common-denominator backend (SQLite) that doesn't incur network latency before getting into the serious integration tests.
Thanks, interesting experience: so much depends on getting developer ergonomics right. There is something to be said for checking the SQL at compile time, though, especially if you're trying to ORM into a typesafe language.
How long ago did you try SQLx? Not necessarily promoting SQLx, but the `query_as` function, which lets one make queries without the live-database macro, has been around for 5 years [1].
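For context, the non-macro API looks something like this (a sketch; the User struct and table are invented, but `query_as` and `FromRow` are real sqlx items):
```
use sqlx::PgPool;

#[derive(sqlx::FromRow)]
struct User {
    id: i64,
    name: String,
}

// No live database is needed at compile time here: the row-to-struct
// mapping is checked when the query actually runs, unlike the query_as!
// macro, which verifies it against a dev database up front.
async fn active_users(pool: &PgPool) -> Result<Vec<User>, sqlx::Error> {
    sqlx::query_as::<_, User>("SELECT id, name FROM users WHERE active = $1")
        .bind(true)
        .fetch_all(pool)
        .await
}
```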
For lower-level libraries there is also the more-downloaded SQLite library, rusqlite [2], whose maintainer also maintains libsqlite3-sys, which is what the sqlite library wraps.
The most pleasant ORM experience, when you want one, IMO is the SeaQl ecosystem [3] (which also has a nice migrations library), since it uses derive macros. Even with an ORM I don't try to make databases swappable via the ORM so I can support database-specific enhancements.
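To give a taste of the derive-macro style, a minimal SeaORM entity and lookup might look like this (a sketch; the table and fields are invented):
```
use sea_orm::entity::prelude::*;

// DeriveEntityModel expands this one struct into the Entity, Column, and
// ActiveModel types the ORM works with.
#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "users")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    pub name: String,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}

// Look up a row by primary key; Ok(None) if it doesn't exist.
pub async fn find_user(db: &DatabaseConnection) -> Result<Option<Model>, DbErr> {
    Entity::find_by_id(1).one(db).await
}
```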
The most Rust-like in an idealist sense is Diesel, but its well-defined path is to use a live database to generate Rust code, which uses macros to define the schema types that the row structs are type/member-checked against. If the auto-detection does not work, you have to use its patch_file system, which can't be maintained automatically through Cargo alone [4] (I wrote a Makefile scheme for myself). You will most likely need the patch_file if you want to use chrono::DateTime<chrono::Utc> for timestamps with time zones, e.g., Timestamp -> Timestamptz for Postgres. And if you do anything advanced like multiple schemas, you may be out of luck [5]. It may also not be the best library if you want large denormalized tables [6], both because of compile times and because a database that is not normalized [7] is considered an anti-pattern by the project.
If you are just starting out with Rust, I'd recommend checking out SeaQl. Then, if you can benchmark that you need faster performance, swap in one of the lower-level libraries for the affected methods/services.
[1] https://github.com/launchbadge/sqlx/commit/47f3d77e599043bc2...
[2] https://crates.io/crates/rusqlite
[3] https://www.sea-ql.org/SeaORM/
[4] https://github.com/diesel-rs/diesel/issues/2078
[5] https://github.com/diesel-rs/diesel/issues/1728
[6] https://github.com/diesel-rs/diesel/discussions/4160
[7] https://en.wikipedia.org/wiki/Database_normalization
Quite similar to manifold-sql[1], which is arguably better integrated into Java than SQLx is into Rust. Inline native SQL in Java is *inherently type-safe* with no mapping: query types, query results, and query parameters are all projected as types at compile time.
1. https://github.com/manifold-systems/manifold/blob/master/man...
sqlx is my favorite way of working with databases in Rust hands down.
I've tried alternatives like Diesel and sea-orm. To be honest, I feel like full-blown ORMs really aren't a very good experience in Rust. They work great for dynamic languages in a lot of cases, but trying to tie a DB schema into Rust's type system often creates a ton of issues once you try to do anything more than a basic query.
It's got a nice little migration system too with sqlx-cli, which is solid.
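Migrations can also be embedded in the binary; a sketch using the migrate! macro, assuming the default migrations/ directory that `sqlx migrate add` creates:
```
// migrate!() embeds ./migrations into the binary at compile time;
// run() applies any pending migrations at startup.
async fn run_migrations(pool: &sqlx::PgPool) -> Result<(), sqlx::migrate::MigrateError> {
    sqlx::migrate!().run(pool).await
}
```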
I’ve used Diesel for a bit now but haven’t had issues wrangling the type system. Can you give an example of an issue you’ve encountered?
This has been exactly my experience! I've found SQLx to be a joy to work with in Rust!
Same. Never again diesel. The type system just turns it into madness. Sqlx is a much more natural fit.
SQLx is great, but I really wish it had a non-async interface. I had to switch a project from sqlx to rusqlite seemingly just due to the overhead of the async machinery: I saw a 20x latency reduction that I narrowed down to "probably async" (it's sort of hard to tell; I find it very difficult to do perf analysis of async code). I try to avoid discussing async so as not to come off as a frothing-at-the-mouth, chest-thumping luddite, but honestly, if sqlx had a non-async interface I'd be very happy to accept the "you don't need to use it" argument. It's the only place where I don't feel like I really have a choice.
Async does not incur 20x slowdowns when you're I/O bound. It would be ridiculous for copying a few bytes to be slower than a syscall. This sounds like mutex issues, or WAL config, or something like that.
I used SQLx with an SQLite database and ran into connection pool problems that would cause the database to be unexpectedly dropped.
The issues I saw seem to be related to these issues:
https://github.com/launchbadge/sqlx/issues/3080
https://github.com/launchbadge/sqlx/issues/2510
The problems did not manifest until the application was under load with multiple concurrent sessions.
Troubleshooting the issue by changing the connection pool parameters did not seem to help.
I ended up refactoring the application's data layer to use a NoSQL approach to work around the issue.
I really like the idea of SQLx and appreciate the efforts of the SQLx developers, but I would advise caution if you plan to use SQLx with SQLite.
SQLx is great, but I had a long laundry list of issues with its SQLite support so I forked it into a focused SQLite-specific library. It has now diverged very far from SQLx, and the number of small inaccuracies and issues we fixed in the low-level SQLite bindings is well into the dozens. The library is unannounced, but is already being used in some high-throughput scenarios.
https://github.com/cortesi/musq
Musq looks very friendly. I will try it in a future project.
Thank you for sharing it!
As a total outsider to sqlx, those issues don’t surprise me: any application on any platform that uses a SQLite in-memory DB concurrently is likely to violate many assumptions made by client-side connection pooling tools. In-memory SQLite is a great tool, but using it indirectly behind a connection pooler that assumes the database is external to the current process is bound to cause problems.
Agree with zbentley, I actually wouldn't expect this to work well - perhaps a good thing for sqlx team to warn against.
SQLx and F# type-providers are probably the best developer experience for writing database access code. I wish more languages had something equivalent.
I think this sort of stuff only comes after a LOT of experience with building SQL db backed systems - it resonated with me immediately. (I'm the OP but not affiliated with this Rust project at all).
I am not much into Rust at the moment; I am quite comfortable with C++. So here goes my question:
I use sqlpp11 in C++.
I generate code and I can use it with strong typing by including some headers. This Rust crate seems to provide compile-time checking.
But will it give me code completion? It is very nice that by pressing '.' you know what you potentially have.
It depends. On RustRover you do, because the query text can be language-injected as SQL, and it uses your configured schema.
I first went to sqlx thinking it would be like JOOQ for Rust, but that wasn't the case. It's a pretty low-level library and didn't really abstract away the underlying DBs much, not to mention issues with type conversions. We've since just used rust-postgres.
I have never used SQLx, but the best SQL integration I can think of is LINQ. How does this compare to that?
In the dotnet world, SQLx is more analogous to F# type providers like FSharp.Data.SqlClient, SQLProvider, or Rezoom.SQL.
Different products. I would not compare them. LINQ is more like Diesel (https://diesel.rs/)
To expand: SQLx isn't an ORM or query builder. What it does is let you write raw SQL with compile-time guarantees of type safety. It does this by connecting to a dev database at compile time and using the database's introspection features (specifically, by preparing a statement[1]) to analyze your queries. (It can also cache this information to check without a database available, and has a basic migration facility.)
[1] https://github.com/launchbadge/sqlx/blob/main/FAQ.md#how-do-...
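Concretely, the checked style looks like this (a sketch; the users table and a configured DATABASE_URL are assumed, with name as a NOT NULL column):
```
// At compile time, sqlx prepares this statement against the database at
// DATABASE_URL, so the column types and the $1 parameter are verified,
// and `row` is an anonymous record with typed fields.
async fn user_name(pool: &sqlx::PgPool, user_id: i64) -> Result<String, sqlx::Error> {
    let row = sqlx::query!("SELECT name FROM users WHERE id = $1", user_id)
        .fetch_one(pool)
        .await?;
    Ok(row.name)
}
```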
I've used SQLx for a couple of projects (MariaDB and SQLite); it's good, it does the thing, though it takes a little bit of getting used to. The fact that it can check queries at compile time is its biggest strength.
I use a fork of sqlx in SQLPage [1]. I think my main complaint about it is runtime errors (or worse, values decoded as garbage) when decoding SQL values to the wrong rust type.
[1] https://sql-page.com/
Love SQLx for my Rust projects. I would like to figure out a great way to use the compile time checks in python or js projects, but haven't explored it yet.
I find it kind of baffling that this toolkit is so popular when it makes handling database joins so difficult. After bashing my head against it for a while, I moved to Diesel, and while that has its own set of problems, I am generally able to get through them without resorting to horrible hacks or losing compile time checks.
What do you mean? It takes SQL queries. You use the `JOIN` keyword in the SQL to do joins.
What problems have you had with joins? I have this comment in one of my projects:
```
It is required to mark left-joined columns in the query as nullable,
otherwise SQLx expects them to not be null even though it is a left join.
For more information, see the link below:
https://github.com/launchbadge/sqlx/issues/367#issuecomment-...
```
Did you have other problems beyond this, or are you referring to something different?
The issue above is a bit annoying but not enough that I'd switch to an ORM over it. I think SQLx overall is great.
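For anyone hitting this, the workaround from that issue is the `?` suffix on a column alias, which forces the checked macros to treat the column as nullable (a sketch; the tables are invented):
```
use sqlx::PgPool;

// The `?` in the alias tells query! that `bio` may be NULL because of the
// LEFT JOIN, so its generated field becomes Option<String>.
async fn users_with_bios(pool: &PgPool) -> Result<(), sqlx::Error> {
    let rows = sqlx::query!(
        r#"SELECT u.id, p.bio AS "bio?"
           FROM users u
           LEFT JOIN profiles p ON p.user_id = u.id"#
    )
    .fetch_all(pool)
    .await?;
    for r in rows {
        println!("{}: {:?}", r.id, r.bio);
    }
    Ok(())
}
```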
If anyone is looking for a similar thing in F#, complete with automatic type generation etc, have a look at SqlHydra.
I've read the GitHub page, but I still have no idea what problems it solves or why I should care about this library.
It's a rust library that you can use to run sql queries against a database. It also inspects the database at compile* time to figure out the type of each column in your query so that your code is type-safe.
* Or in your editor as you're writing code.
Just coming here to make a prediction: using raw SQL is not great for anything but very simple cases. You can make it type-safe, but that becomes tricky once things become dynamic.
But the real problem is ergonomics. The better solution in almost any language is to leverage the syntax of your language to allow for as much (non-macro) type safety and auto-completion as possible.
For example, instead of a raw SQL string like the `WHERE organization = ?` query upthread, you'd write a builder chain whose methods match SQL keywords, not the language's collection-function names; see the sketch below. As you see, that's also nice because now you can actually use variables easily, and even use pure Rust to decide things dynamically.
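For what it's worth, sea-query's real API comes close to what I'm describing; a sketch of the `organization` filter from upthread:
```
use sea_query::{Expr, Iden, PostgresQueryBuilder, Query};

// Identifiers are plain Rust values, so the compiler and autocomplete both
// see them; Iden renders the variants as snake_case SQL names.
#[derive(Iden)]
enum Users {
    Table,
    Id,
    FirstName,
    Organization,
}

fn user_query(organization: &str) -> String {
    Query::select()
        .columns([Users::Id, Users::FirstName])
        .from(Users::Table)
        .and_where(Expr::col(Users::Organization).eq(organization))
        .to_string(PostgresQueryBuilder)
}
```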
You can further increase type safety if you want by doing things like `.from(table("users"))` and running extra checks on that table() part, similar to what the lib probably does. Also, sometimes you might have to compromise on your syntax: things like `"organization".=(yourVariable)` might have to be slightly rewritten.
Still, I think that people will rather end up with a library like I described, unless the SQL is very basic/static.
No, my experience is the inverse. The type of library you describe is nice for basic queries, but once you start needing CTEs, subqueries, Postgres JSON queries, etc., it just becomes easier to manage it all in SQL directly.
I'm using JOOQ. Not saying that JOOQ is the greatest library, but all of what you just mentioned works in there without problem. Including CTEs and json stuff.
With a library such as SQLx, you can never really factor anything out; or at least it's very hard, and you lose the actual type safety. I've been there and done that with doobie [https://typelevel.org/doobie/], which is basically the same thing in a different color.
I agree and prefer Diesel to SQLx, but I do use both.
I've been using SQL from backend languages for years, and I totally disagree. Using raw SQL is the only way for me. It's much easier to develop, tune, and debug in a SQL IDE. There's no need to translate back and forth between SQL and the language-specific DSL. With SQLx, I get all the type safety I really need.
Dynamically constructing queries is awkward, but most of mine have very limited dynamic variation.
I get where you are coming from. But it's very easy to generate the SQL from the code: I take the generated SQL, tune it in my SQL IDE against the DB, and then adjust the code. Since the code is basically a 1:1 mapping to SQL (just with slightly different syntax), there isn't really a problem with that.
Once you have any kind of dynamic stuff (like a dynamic filter), you don't have 100% pure SQL anymore anyway. If you don't have that, okay, this lib will be more convenient.