Is the daemon still considered beta? I am just asking because I am seeing a bunch of different articles and posts about how to set up the daemon, but there is no official documentation on how to set it up. Unless I missed it.
There are also a bunch of unanswered questions that could fit into a daemon FAQ.
- Do I have to run x daemons for x users? On a Linux box with 20 users, do I then have 20 daemon processes and 20 sockets?
- How do I set up the daemon on macOS without having to start it manually in a terminal after login? (I know there are launchd services, but unfortunately Apple loves to break them quite frequently.)
- (If the answer to the first one is a "yes":) is there a daemon planned that accepts connections from multiple atuin client (user) processes?
The daemon is basically intended as a workaround for those using ZFS. For those not using ZFS, there is little reason to install it at present, and it’s likely to complicate things for the average user.
We have a lot of users who struggle to ensure they've set up the shell plugin correctly, so also having setup steps for a daemon would increase friction there.
The daemon will only ever be single user. Atuin was never designed to be a multi-user process, and unless there's a significant number of users who wish to run on ZFS with multiple users, it's not likely to be worth the re-architecture.
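For ZFS users who do want it, enabling daemon mode is a small config change plus arranging for `atuin daemon` to run at login. A sketch of the relevant `config.toml` section - the key names match recent 18.x documentation, but double-check against the docs for your version:

```toml
# ~/.config/atuin/config.toml
[daemon]
# Tell the client to talk to the daemon instead of opening the db itself.
enabled = true
# Optional: where the daemon listens. The path below is the documented
# default, shown here only for illustration.
# socket_path = "~/.local/share/atuin/atuin.sock"
```

You'd then start `atuin daemon` itself from your shell init or a service manager.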
launchd would be the main way, but as you say, Apple breaks it a lot. There are plenty of ways you could do it with various service managers, but that's up to the end user to decide. There's not a huge benefit to using the daemon on macOS, in practice.
We could use something like daemonize, but I don’t think it’s worth the increase in complexity.
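For anyone who wants launchd despite its quirks, a minimal LaunchAgent sketch - the label and binary path are assumptions, so adjust them to your install, save the file under `~/Library/LaunchAgents/`, and load it with `launchctl`:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <!-- Label is arbitrary; pick anything unique. -->
    <key>Label</key>
    <string>sh.atuin.daemon</string>
    <key>ProgramArguments</key>
    <array>
        <!-- Adjust to wherever atuin is installed on your machine. -->
        <string>/usr/local/bin/atuin</string>
        <string>daemon</string>
    </array>
    <key>RunAtLoad</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
</dict>
</plist>
```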
Most of the questions you've seen about the daemon have been because 18.4 unintentionally broke something with Home Manager - lots of Nix people use Home Manager + ZFS, which means lots of daemon usage.
In addition, if ZFS ever fixes its issues with SQLite, we will have little reason to continue maintaining the daemon.
I am using ZFS only on my Proxmox node(s), but the VMs don't use ZFS. I thought that the daemon was also a way of having a timed sync (and thus superior), instead of checking how much time has passed whenever I run a command and then syncing.
With the current non-daemon architecture (unless you choose to sync after every command), it’s still possible that commands are not available to other hosts.
That’s why I thought that the superior sync model would be the future. I didn’t know that it was basically just seen as a workaround.
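For reference, the non-daemon client gates this on the `sync_frequency` setting in `config.toml`; a sketch (the value is just an example, not a recommendation):

```toml
# ~/.config/atuin/config.toml
# Sync at most this often; a command run before the interval has
# elapsed won't trigger a sync, which is the gap described above.
sync_frequency = "5m"
```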
The initial motivation for implementing the daemon was to avoid constantly opening and closing the SQLite db, which can interact badly with ZFS commit scheduling and cause user-visible delays at the shell prompt. Holding the db open in a separate process avoids this.
Once that mechanism exists, there are other opportunities: various work that can be moved to the daemon and taken out of the inline shell prompt. Sync was a necessary one, because the daemon has the db open, and then it’s a small additional step to allow async sync (heh).
Would that, by itself, be enough to justify maintaining the rest of the daemon complexity (as noted, not just internal implementation, but also end-user config management)? It seems doubtful, and this is at least part of the reason for calling it experimental: in addition to being a significant bit of new, unproven implementation, the background sync feature might go away. That leads to the question you're asking, as a user primarily interested in that feature.
There are also other opportunities, and at least hypothetically a collection of those might make the daemon architecture worthwhile even if the ZFS issue is one day resolved. Wild and totally unfounded speculation about potential additions might include:

- a change to the server protocol to avoid regular polling when the server has updates for an otherwise idle daemon, using push-style updates or notifications instead
- some kind of peer-to-peer discovery and sync between devices on the same local network, for instant sync and reduced external traffic / connectivity requirements - maybe even wider, for serverless operation more like Syncthing
- daemon socket forwarding via ssh, to facilitate minimal state and setup on remote machines or more ephemeral setups like containers
- other things ellie is maybe cooking up that I have no idea about?
Each of these would involve substantial changes elsewhere, as well as a commitment to the daemon-based architecture that comes with additional work to make setup and startup easier. Some obvious ideas for those have also been discussed, but again, that work is only worthwhile if it supports significant, committed features.
Exactly this. The extra functionality it would enable is cool, but realistically not something I have seen very much demand for. At least, not enough to justify a significant restructuring + much more complex setup and operation.
Realtime/push streaming from the server would also be cool, but likely to make self-hosting setups harder. The server is intentionally "just" an HTTP API with a database. We could end up with websockets, if it's something enough people want.
In the end, I'd like to keep it experimental while ZFS (and btrfs?) have SQLite performance issues. Once (if?) those are resolved, and if there's no clear demand for daemon-supported features, it will likely be deprecated.
It's because I have automatic snapshot creation and deletion running in the background. When that happens, fsync may be delayed, and applications using SQLite like atuin and Firefox are affected.
Maybe the delay is not before the shell prompt but before command execution - I haven't encountered it since I started using the daemon, and my memory has faded.
I only run the daemon for Proxmox and TrueNAS shells. I have a script here that should give you a good starting point; put it in one of your startup scripts. I haven't seen the need for the daemon on Mac or Linux.
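Not the script linked above, but a rough sketch of what such a login-time starter might look like. The socket path default is an assumption based on atuin's documented data directory; adjust for your system:

```shell
#!/bin/sh
# Sketch: start the atuin daemon at login if one isn't already running.

start_atuin_daemon() {
    # Assumed default socket location; override via ATUIN_DAEMON_SOCKET.
    sock="${ATUIN_DAEMON_SOCKET:-$HOME/.local/share/atuin/atuin.sock}"
    # If a socket already exists, assume a daemon is running and do nothing.
    if [ -S "$sock" ]; then
        return 0
    fi
    # Detach from the login shell so the daemon survives logout.
    nohup atuin daemon >/dev/null 2>&1 &
}

start_atuin_daemon
```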
Rob