My experience with SurrealDB starting with v0.3 in February 2023, all the way up to v3 in 2026
This is a follow-up post to an article I wrote in 2023, which you can read by clicking on the banner above.
Motivations
I have been using SurrealDB for my personal blog/open source forum platform since February 2023, well before v1 was fully released, so I have a pretty good understanding of the database now and how it has evolved over the last few years. I have primarily used v1, up until last month, when I decided it was time to upgrade to v3.
The main draw for me was the stable release of embedded Node.js runtime support in February 2026. This was the feature I had really been waiting for to make my project as portable and easy to set up as possible. Before this, you had to manually set up a remote SurrealDB instance and connect to it, which isn't hard, but the goal is simplicity and ease of use, right? That was my major motivation for updating, along with some lingering performance issues from the early v1 releases that I had never gotten around to investigating.
Background
First, I would like to give some background about the project: what I liked about SurrealDB initially, and why I chose it over MongoDB when I first started working on this project more seriously again in 2023.
When I took my relational databases course in university, I pretty much hated everything about it. So when I took a web development course in my final year, where one of the major assessments was a semester-long project, I chose MongoDB for the backend, as it was a popular NoSQL database at the time and didn't seem too difficult to learn. It was completely fine for a school project, but eventually things got cumbersome and unwieldy, and I lost interest in continuing with it after the course was over.
Fast forward to when I got my first job as a web developer. I joined a "startup" of 4 people (with only 2 developers, myself included) that created ERP-as-a-service software. I say "startup" like that because when I joined, their software was pushing 10 years old and still stuck on .NET Framework 4.7.2. I was thrust into the world of legacy enterprise SQL, with stored procedures hundreds of lines long and bloated classes thousands of lines long. I was essentially fighting for my life at all times to understand what the hell was going on. The point of this, I suppose, is that I learned to appreciate what SQL had to offer, and I actually quite liked it this time around.
Okay, so getting to adopting SurrealDB. I had seen some videos on Youtube about the early demos and proofs of concept, social media hype about what it could be and what it was capable of, etc. What really drew me in, though, was the SQL-like syntax combined with first-class support for complex data structures like nested arrays and even full objects, as well as their concept of record IDs and how they permeate through the architecture. The limitation of the classic SQL single flat row paradigm was one of the major detractors I rubbed up against while actually learning it at my first job. Having separate tables for every one:many or many:many relationship was frustrating for me.
I decided I was going to play around with SurrealDB and test it out, so I installed it on my laptop and tried to see whether it was a viable alternative to MongoDB for what I wanted, and whether I could use it to replace the backend of the old web development project I had worked on a year earlier. You can read all about my experience with that here, in my previous post.
Staying on v1
So a few years went by after completing the migration to SurrealDB (which I was very pleased with), and I kind of lost interest in the project. I had used it primarily as a personal blog, adding a few features here and there over time when I felt like something was missing. I didn't have any issues with v1 (aside from some performance problems, which weren't really significant for what I was doing), so I just stayed on v1 for a while.
Then we arrive in 2026. My ambitions to have the project become a highly successful commercial product were long gone, and I didn't know what to do with it anymore. I didn't want to shelve the project that I had put thousands of hours into, so I put my big boy pants back on and started working on it again.
Upgrading to v2 and v3
The release of the embedded runtime in Node.js was really the thing that pushed me over the edge to start working on the project again. I updated all my dependencies and got to work. I think I spent 30-40 hours over the course of 3-4 days straight, just pushing through and getting my existing database schema updated for the changes introduced in both v2 and v3 at once. I had already done the transition from beta to v1 in the fall of 2023 when it released, so I had a pretty good idea of what I was getting myself into. Overall, going to v2 and then v3 was significantly smoother than getting to v1 from the beta.
The migration guides in the docs were straightforward and thorough. They explained all of the breaking changes and syntax differences that needed to be updated, and the team even released some tooling to help automate part of the process. I found it helpful, but not entirely seamless; a significant amount of manual migration was still required.
The changes required going from v1 to v2 were not very impactful, but going from v2 to v3 had a lot more to consider; even so, I wasn't too heavily affected overall. The following were the most notable things I encountered during my upgrade.
Deprecation of futures: My biggest refactoring was the removal of the `future` type in favor of `COMPUTED` fields. This made things more restrictive, but arguably also better in the long term, as it discourages questionable design patterns. It was annoying, though: after I exported my existing database using the provided tooling, I had to go in and manually search and replace any usages of futures, whether in default fields or in fields set dynamically on records.
Record ID parsing: Another major issue I encountered was the dropped support for automatically parsing strings as record IDs. This forced me to abandon the `CONTENT` syntax when creating new entries and to specify each field explicitly, parsing the record IDs with `new RecordId()` from the client library. I wasn't a big fan of this, as it made the backend experience less fluid, but ultimately it was okay and I adjusted. I completely overhauled my query building helpers to automatically parse different parameter types, such as record IDs or arrays of record IDs.
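To give a flavor of what that overhaul looked like, here is a minimal sketch of the conversion step. The `RecordId` stand-in and the `toRecordId` helper are my own names for this illustration (the real `RecordId` class comes from the `surrealdb` npm package), not an official API:

```javascript
// Minimal stand-in for the SDK's RecordId class, so this sketch is
// self-contained. In the real project, import it from `surrealdb`.
class RecordId {
  constructor(tb, id) {
    this.tb = tb; // table name
    this.id = id; // id portion
  }
}

// Hypothetical helper: detect "table:id" strings (or arrays of them)
// and convert them to RecordId instances before binding parameters.
function toRecordId(value) {
  if (Array.isArray(value)) return value.map(toRecordId);
  if (typeof value === "string" && /^[A-Za-z_]+:.+$/.test(value)) {
    const sep = value.indexOf(":");
    return new RecordId(value.slice(0, sep), value.slice(sep + 1));
  }
  return value; // leave plain strings and other types untouched
}
```

The helper runs over every bound parameter, so the rest of the backend can keep passing plain `"table:id"` strings around.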
At the end of that multi-day hyperfixation, I had an updated and functional project which was compatible with the newest release of SurrealDB, and I was ready to start using some of the new tools available to me.
What Changed
Ultimately not a lot, and I think that is a good thing. I didn't really need to redesign the way that my backend worked and the core features of SurrealDB continued to shine through. I feel very pleased with my decision to adopt it early on, and I am glad that I stuck with it.
I do want to revisit each of the examples I presented in my original post, comparing how things used to work with how they work now. I will mention areas where my design philosophy changed, or where I needed to update my workflow to account for changes introduced in v2 or v3.
Creating a post
```surql
CREATE post SET
    user = $user,
    title = $title,
    content = $content,
    replyTo = $replyTo ?? NONE,
    topics = $topics,
    images = $images,
    votes.positive = [$user];
```

```surql
-- from my schema definition file
DEFINE EVENT OVERWRITE updateTopics ON post WHEN $event = 'CREATE' THEN {
    FOR $topic IN $after.topics {
        IF !record::exists($topic) {
            CREATE $topic SET posts = [$after.id];
        } ELSE {
            UPDATE $topic SET posts += $after.id;
        }
    };
};
```
- I had to remove any usages of `CREATE ... CONTENT $value` because record IDs are no longer automatically parsed from strings. This meant I couldn't just supply a parsed JSON object which might contain a record ID, as `$user`, `$replyTo`, and `$topics` all would have been previously.
- The event is defined on the table itself in my schema definition file, and only runs when a new post is created. It then creates or updates each topic used on the post, adding the post's ID to the array on that topic.
- While I could replace this event with the new `REFERENCE` syntax on the table definition, that unfortunately doesn't work when applied to a table retroactively. All of my existing post:topic references would break, and only new posts would set up the reference correctly. I am hoping the SurrealDB team can address this at some point, as it would be a huge step toward simplifying the logic and reducing the chance of dangling references to deleted entries.
Getting submissions for the feed
```surql
SELECT id, user.id, user.name, title, time, edited, timeEdited,
    replyTo.id, replyTo.title, topics, comments, votes, score,
    archived, images, visits
FROM post
WHERE archived != true
ORDER BY time DESC
FETCH user, replyTo, images;
```
- Not much has changed here; I can still easily pull data from record fields such as `user`, `replyTo`, and `images`.
- I do have some thoughts about how `FETCH` works, since you can't combine it with `OMIT` on a fetched field. This could be problematic if there is any sensitive data stored in the fetched record, like a hashed password on a user, because it forces you to specify every field you need from a fetched record even if you only want to omit a single one.
- You can avoid this by implementing permissions on tables or even individual fields, but that doesn't really work for my application, as all queries are run by the backend server instance, which always has full authority.
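One workaround is stripping sensitive fields from the rows after the query returns. A sketch of such a helper, assuming dotted field paths of my own design (not a SurrealDB feature):

```javascript
// Hypothetical post-processing helper: since OMIT can't be applied to
// a FETCHed record, remove sensitive fields from the result rows after
// the query instead. Paths like "user.password" are this sketch's own
// convention.
function omitFields(rows, paths) {
  return rows.map((row) => {
    const copy = structuredClone(row); // don't mutate the original rows
    for (const path of paths) {
      const keys = path.split(".");
      let target = copy;
      // walk down to the parent object of the field to remove
      for (const key of keys.slice(0, -1)) {
        if (target == null) break;
        target = target[key];
      }
      if (target != null) delete target[keys.at(-1)];
    }
    return copy;
  });
}
```

This is O(rows × paths) and only worth it when permissions can't do the job, as in my single-authority backend.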
Casting votes
```surql
UPDATE $item SET
    votes.positive = array::union(votes.positive, [$user]),
    votes.misleading -= $user,
    votes.negative -= $user;
```
- Nothing has changed here: a single endpoint updates anything that is "votable" by using the record ID itself as the target of the `UPDATE`.
Calculating scores
```surql
DEFINE FIELD OVERWRITE score ON comment
    COMPUTED (votes.positive.len() - (<float>votes.misleading.len() / 2) - votes.negative.len());
```
- Here is an example of where I had to migrate away from `future`. The underlying calculation didn't change at all; it just got a slightly different wrapper.
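For illustration, the same calculation can be mirrored in plain JavaScript, which is handy for sanity-checking expected scores client-side (a sketch; the real computation happens in the `COMPUTED` field above):

```javascript
// Mirror of the COMPUTED score field: positive votes count +1,
// misleading votes count -0.5, negative votes count -1. The vote
// arrays are assumed to hold user record IDs.
function score(votes) {
  return (
    votes.positive.length -
    votes.misleading.length / 2 -
    votes.negative.length
  );
}
```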
Following and unfollowing
```surql
RETURN {
    UPDATE $user SET
        following = array::union(following, [$follower]);
    UPDATE $follower SET
        followers = array::union(followers, [$user]);
};
```
- Same underlying query, but now wrapped in a `RETURN` block, which acts as a transaction: if anything fails inside the block, all of the changes are rolled back.
Sending email confirmations
```surql
CREATE accountConfirmation SET
    account = $account;
```

```surql
-- from my schema definition file
DEFINE TABLE OVERWRITE accountConfirmation SCHEMAFULL;
DEFINE FIELD OVERWRITE account ON accountConfirmation
    TYPE record<account>;
DEFINE FIELD OVERWRITE time ON accountConfirmation
    TYPE datetime DEFAULT ALWAYS time::now();
DEFINE FIELD OVERWRITE used ON accountConfirmation
    TYPE bool DEFAULT ALWAYS false;
DEFINE FIELD OVERWRITE expired ON accountConfirmation
    COMPUTED time::now() > time + 15m;
```
- I first started out with most of my tables being `SCHEMALESS`, allowing any fields to be included on an entry with no restrictions.
- Now I have moved fully to `SCHEMAFULL`, with explicit `DEFINE FIELD` statements for every table. I like that both options are available, letting you pick and choose which approach to go with.
- There is also the option of `TYPE object FLEXIBLE` when defining a field, which allows a nested object with any values inside. This is good for things like metadata or user-defined data that may differ for each entry.
Fetching all submissions for a specific topic
```surql
SELECT id, user.id, user.name, title, time, edited, timeEdited,
    replyTo.id, replyTo.title, topics, comments, votes, score,
    archived, images, visits
FROM post
WHERE topics CONTAINS $topic
    AND archived != true
ORDER BY time DESC
FETCH user, replyTo, images;
```

```surql
SELECT id, user.id, user.name, content, time,
    topics, quote, replies, votes, score,
    edited, timeEdited, visits, images
FROM thread
WHERE topics CONTAINS $topic
    AND replyTo = NONE
ORDER BY time DESC
FETCH user, quote, quote.user, images;
```
- I have since added the concept of a thread, similar to what you might find on Twitter or Bluesky, so now I have to fetch both so they can be displayed on the topic's feed.
- Both queries make use of the `CONTAINS` operator to check an array for a specific value. This makes arrays of record IDs super handy for filtering and for defining relationships between different tables.
Query builder setup in JavaScript
```typescript
return await new DatabaseQuery()
    .addSql(`
        SELECT id, name, roles, traits, dateJoined, votes, score
        FROM user
        WHERE topics CONTAINS $topic
        ORDER BY score
    `)
    .addRecord("topic", `topic:${topic}`)
    .queryAll<User>();
```
- I have revised my query setup significantly to prioritize single blocks of SQL. It improves readability for me and forces me to design queries in a more cohesive way, without conditional branches.
- The features of SurrealDB make this pretty simple, with `IF ELSE` branches, parameters, record traversal, and a slew of built-in functions for all the supported data types. You can see some examples of this in the section below.
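For context, here is a minimal, hypothetical sketch of what a helper like `DatabaseQuery` might look like. It only accumulates SQL and parameter bindings; the real version also parses record IDs and executes the query through the SurrealDB client, which is stubbed out here:

```javascript
// Hypothetical, stripped-down version of the DatabaseQuery helper
// used in the snippet above. Only the accumulation logic is shown;
// queryAll() in the real helper would hand sql + bindings to the
// SurrealDB client.
class DatabaseQuery {
  constructor() {
    this.sql = "";
    this.bindings = {};
  }
  addSql(sql) {
    this.sql += sql;
    return this; // chainable
  }
  addRecord(name, value) {
    // The real helper converts "table:id" strings into RecordId
    // instances here; this sketch stores the raw value.
    this.bindings[name] = value;
    return this;
  }
  build() {
    return { sql: this.sql.trim(), bindings: this.bindings };
  }
}
```

Keeping the builder this thin is what makes single-block SQL practical: all the branching lives in SurrealQL, not in JavaScript.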
Advanced Patterns
I'd now like to cover some new areas where I feel like I have been able to really take advantage of the features of SurrealDB to implement powerful yet concise queries with a relatively simple construction. It takes some time to start thinking about these problems differently: how to structure data and how to relate tables and fields to make these new patterns possible.
Approving moderation requests
```surql
LET $request = (
    UPDATE ONLY $request SET
        approvals += $user
);
IF array::len($request.approvals) >= $threshold {
    UPDATE $request.topic SET
        moderators += $request.user;
    IF $request.user.roles CONTAINSNOT "moderator" {
        UPDATE $request.user SET
            roles += "moderator";
    };
    UPDATE $request SET
        closed = true;
};
```

```surql
-- from my schema definition file
DEFINE TABLE OVERWRITE moderationRequest SCHEMAFULL;
DEFINE FIELD OVERWRITE user ON moderationRequest
    TYPE record<user>;
DEFINE FIELD OVERWRITE topic ON moderationRequest
    TYPE record<topic>;
DEFINE INDEX OVERWRITE user_topic ON moderationRequest
    FIELDS user, topic UNIQUE;
DEFINE FIELD OVERWRITE approvals ON moderationRequest
    TYPE array<record<user>> DEFAULT ALWAYS [];
DEFINE FIELD OVERWRITE denials ON moderationRequest
    TYPE array<record<user>> DEFAULT ALWAYS [];
DEFINE FIELD OVERWRITE closed ON moderationRequest
    TYPE bool DEFAULT ALWAYS false;
```
- There is a lot happening here; updating the request and appending the user ID of the approver to an array, checking if the number of approving users has reached the threshold, updating the requested topic with the new moderator's ID, giving the requesting user the moderator role if they aren't one already, and then closing the request.
- All of this requires passing in only three parameters: the ID of the request, the ID of the user who is approving, and the required threshold number.
- Record IDs allow traversal of related tables using dot notation. The `moderationRequest` table only stores the user ID and topic ID directly, but since they are record IDs, it's as simple as `$request.user.roles` to get a nested field from another table. There is no need to orchestrate additional queries or joins to get this information beforehand.
- I feel like this example really showcases the power of SurrealDB in a way that I never imagined when I first started using it.
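The branching in the approval query can also be mirrored as a small pure function, which I find useful for unit-testing the expected behavior without a database. This is a sketch with names of my own choosing, not part of the application or SurrealDB:

```javascript
// Mirror of the approval branches: given the updated request and the
// threshold, return the follow-up actions the query would perform.
function approvalActions(request, threshold) {
  const actions = [];
  if (request.approvals.length >= threshold) {
    actions.push("addModerator"); // add user to topic's moderators
    if (!request.user.roles.includes("moderator")) {
      actions.push("grantModeratorRole"); // first-time moderators only
    }
    actions.push("closeRequest");
  }
  return actions;
}
```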
Deleting a post
```surql
IF $post.user != $user.id AND $user.roles CONTAINSNOT "admin" {
    THROW "You are not allowed to do this.";
};
FOR $topic IN $post.topics {
    UPDATE $topic SET
        posts -= $post;
};
FOR $image IN $post.images {
    UPDATE $post.user SET
        tokens += $image.tokens;
    DELETE $image;
};
IF array::len($post.comments) > 0 {
    UPDATE $post SET
        content = "[deleted]",
        deleted = true;
} ELSE {
    DELETE $post;
};
```
- Another powerful query that runs entirely inside a single database transaction. No additional queries are required to join any tables to get the requisite information.
- Only two parameters are supplied: the ID of the post and the ID of the user trying to delete it.
- It checks whether the user is the author of the post or an admin; if not, they are shown the thrown message in a notification. It removes the post from any topics it was added to. It deletes any internal images uploaded for the post and refunds the token balance to the user. Then, depending on whether anyone commented on the post, it either deletes the post completely or removes just the content, so that any discussion generated on the post doesn't vanish into limbo.
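The refund-and-delete decision can be expressed as a small pure function for clarity. A sketch only; the actual work happens inside the SurrealQL above, and the field names are taken from it:

```javascript
// Mirror of the delete-post branches: sum the token refund from the
// post's images, and choose a soft delete (content replaced) when the
// post has comments, otherwise a hard delete.
function deletePlan(post) {
  const refund = post.images.reduce((sum, img) => sum + img.tokens, 0);
  const mode = post.comments.length > 0 ? "soft" : "hard";
  return { refund, mode };
}
```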
Conclusion
I feel like SurrealDB is a real contender for one of the best new database paradigms out there right now. Yes, it is still young and has some issues, but the team is continually releasing updates, constantly improving things, and regularly fixing bugs. I have had a great experience using it.
Should you consider SurrealDB for your next personal project or new service? I think yes. There is a lot of value to be gained here, especially when it comes to modelling complex relationships with potentially variable shapes and sizes. Should you replace your existing SQL or NoSQL backend with SurrealDB right now? No, probably not. It doesn't make sense to replace something perfectly serviceable with a technology that still needs to prove itself in the long run.
I am also only covering a small portion of what SurrealDB is capable of. Curious? Download it yourself right now and give it a try. You can easily install it anywhere and host it yourself on a small Linux box; that is what I do. I haven't paid a dime for the fully managed cloud hosting they offer.
What do you think? Have you used SurrealDB before, or are you using it right now in a production application? How does your impression of SurrealDB differ from mine? Let me know in the comments below.