hydrus is cpu and hdd hungry

The hydrus client manages a lot of complicated data and gives you a lot of power over it. To add millions of files and tags to its database, and then to perform difficult searches over that information, it needs to use a lot of CPU time and hard drive time--sometimes in small laggy blips, and occasionally in big 100% CPU chunks. I don't put training wheels or limiters on the software either, so if you search for 300,000 files, the client will try to fetch that many.

In general, the client works best on snappy computers with low-latency hard drives where it does not have to constantly compete with other CPU- or HDD-heavy programs. Running hydrus on your games computer is no problem at all, but you should have it set not to start a big job while your CPU is otherwise busy so your games can run freely. Similarly, if you run two clients on the same computer, you should have them set to work at different times, because if they both try to process 500,000 tags at once on the same hard drive, they will each slow to a crawl.

Keeping your HDDs defragged is very important, and good practice for all your programs anyway. Make sure you know what this is and that you do it. I use PerfectDisk. O&O Defrag is also good.

maintenance and processing

I have attempted to offload most of the background maintenance of the client (which typically means repository processing and internal database defragging) to time when you are not using the client. This can either be 'idle time' or 'shutdown time'. The calculations for what these exactly mean are customisable in file->options->maintenance and processing.

If you run a quick computer, you likely don't have to change any of these options. Repositories will synchronise and the database will stay fairly optimal without you even noticing the work that is going on. This is especially true if you leave your client on all the time.

If you have an old, slower computer though, or if your hard drive is high latency for one reason or another (e.g. you use encryption), make sure these options are set to whatever is best for your situation. Turning off idle time completely is often helpful, as some older computers are slow to recognise--mid-task--that you want to use the client again, or take too long to abandon a big job halfway through. If you set your client to only do work on shutdown, then you can control exactly when that happens.

Keeping the database vacuumed is important, so if you remove it from the normal maintenance schedule, make sure you run it every now and then manually from database->maintenance->vacuum. It takes a few minutes to run, but it is great for cleaning up a database recently fragged by several million new rows of data.
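For the curious: the hydrus database is SQLite, so a vacuum is ultimately SQLite's VACUUM command, which rewrites the database file to defragment it and reclaim free pages. Here is a minimal standalone sketch of the same operation (the example.db file and its table are made up for illustration and have nothing to do with the actual client schema):

```python
import sqlite3

# Build a small throwaway database, delete half the rows to leave free
# pages behind, then VACUUM to rebuild the file compactly.
con = sqlite3.connect("example.db")
con.execute("DROP TABLE IF EXISTS files")
con.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, blob TEXT)")
con.executemany("INSERT INTO files (blob) VALUES (?)", [("x" * 1000,)] * 10000)
con.commit()

con.execute("DELETE FROM files WHERE id % 2 = 0")  # leave holes in the file
con.commit()

con.execute("VACUUM")  # rewrites the whole file; takes minutes on a big database
remaining = con.execute("SELECT COUNT(*) FROM files").fetchone()[0]
con.close()
```

The rewrite is why it takes a few minutes: SQLite copies every live row into a fresh, compact file.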

reducing search and general gui lag

Searching for tags via the autocomplete dropdown and searching for files in general can sometimes take a very long time. It depends on many things. In general, the more predicates (tags and system:something) you have active for a search, the faster it will be. And the more specific the search domain (e.g. "local files" instead of "all known files" and "my tag repo" instead of "all known tags"), the faster it will be.
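To give a rough idea of why extra predicates speed things up: each predicate maps to a set of matching files, and a search is basically the intersection of those sets, so every extra predicate shrinks the work. This toy sketch (made-up tags and file ids, not the client's actual query code) shows the idea:

```python
# Each predicate maps to a set of matching file ids; a search intersects them.
tag_index = {
    "samus aran": {1, 2, 3, 5, 8, 13},
    "blue eyes": {2, 3, 5, 7, 11},
    "system:archive": {3, 5, 8},
}

def search(predicates, index):
    # Start from the smallest set so every later intersection is cheap.
    sets = sorted((index[p] for p in predicates), key=len)
    results = set(sets[0])
    for s in sets[1:]:
        results &= s
    return results

print(search(["samus aran", "blue eyes", "system:archive"], tag_index))  # -> {3, 5}
```

A single vague predicate over "all known files" is the worst case: the starting set is huge and nothing narrows it down.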

You can also look at file->options->speed and memory, again especially if you have a slow computer. Increasing the autocomplete thresholds is very often helpful. You can even force autocompletes to only fetch results when you manually ask for them.

Having lots of thumbnails open can slow many things down. If you get lag with 10,000 files open in your searches, try cutting it down to only 1,000 or so. Split your downloading binges and subscriptions into smaller, rarer chunks, and don't try to watch 1080p webms while five other things are going on.

finally - profiles

Lots of my code remains unoptimised for certain situations. My development environment is obviously specific to me and has only a few thousand images and a few million tags. As I write code, I am usually more concerned with getting it to work at all rather than getting it to work fast for every possible scenario. So, if something is running particularly slow for you, but your computer is otherwise working fine, let me know and I can almost always speed it up.

If something is slow for you, the most helpful thing you can send me is a profile. A profile is a large block of debug text that tells me which parts of my code are running slow for you. Currently, hydrus profiles have three sections. A complete one looks like this.

It is very helpful to me to have a profile. You can generate one by hitting help->debug->db profile mode, which tells the client to generate profile information for every subsequent database request. This will spam your logfile, so don't leave it on for a long time (you can turn it off by hitting the same menu entry again).

Turn on profile mode, do the thing that runs slow for you (importing a file, fetching some tags, whatever), and then shut the client down and go to the newly created profile logfile, which should be at install_dir/db/client profile - TIMESTAMP.log. This file will be filled with several sets of tables giving timing information for one (or, more likely, several) code calls. You can either copy and paste the data labelled for your problem request (e.g. "import_file") or just send me the whole logfile.
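hydrus is written in Python, and as far as I know these timing tables are the same general shape as the output of Python's standard cProfile module. If you are curious what that kind of table looks like, here is a minimal standalone sketch (slow_function is just a stand-in for a slow database call):

```python
import cProfile
import io
import pstats

def slow_function():
    # A stand-in for a slow call: do some pointless arithmetic.
    total = 0
    for i in range(200000):
        total += i * i
    return total

profiler = cProfile.Profile()
profiler.enable()
slow_function()
profiler.disable()

# Sort the timing table by cumulative time, like a profile log would,
# and print the top five entries.
buffer = io.StringIO()
pstats.Stats(profiler, stream=buffer).sort_stats("cumulative").print_stats(5)
print(buffer.getvalue())
```

The real logfile is much bigger, but the principle is the same: each row names a function and how long the program spent inside it.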

pubsub profile mode is experimental and very log heavy. Feel free to play with it, but it is really for my own purposes. Almost everything that is slow in the program is due to my inefficient database queries.

There are several ways to contact me.