So, reality caught up with us over the weekend: our physical machine broke down, and now we're basically skipping the testing phase (https://www.reddit.com/r/filemaker/comments/1qjxa3z/virtual_machine_considerations/).
We have about 55GB worth of databases, and they need to support weekly exports. One of the larger files is about 4.5GB and has just shy of a million rows.
When I run the export as a script (using Perform Script on Server), it writes out about 42 records per minute; at that rate, the million-row file alone would take over two weeks.
The script looks like this:
# Runs on the server via Perform Script on Server; the script parameter is the export directory
Go to Layout [ "LdocExt" (name) ]
Set Variable [ $filename ; Value: "name.csv" ]
Set Variable [ $directory ; Value: Get ( ScriptParameter ) ]
Set Variable [ $target ; Value: $directory & $filename ]
Show All Records
Export Records [ File Name: "$target" ; Create folders: Yes ; Character Set: "Unicode (UTF-8)" ; Field Order: .... ]
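For completeness, the client side kicks this off with Perform Script on Server, passing the target directory as the script parameter; roughly like this (the script name here is a placeholder):

Perform Script on Server [ Specified: From list ; "Export CSV" ; Parameter: $directory ; Wait for completion: On ]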
The field order contains 134 fields, 2 of which come from related tables.
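In case anyone wants to sanity-check the rate numbers on their own setup, a timing wrapper along these lines works (the variable names are just illustrative):

Set Variable [ $t0 ; Value: Get ( CurrentTimeUTCMilliseconds ) ]
Export Records [ File Name: "$target" ; .... ]
Set Variable [ $minutes ; Value: ( Get ( CurrentTimeUTCMilliseconds ) - $t0 ) / 60000 ]
Set Variable [ $recordsPerMinute ; Value: Get ( FoundCount ) / $minutes ]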
When I go to the same layout and run File > Export Records manually, recreating the same 134 fields, the export speed increases to about 1,100 records per minute. That's still roughly 15 hours for the big file, and still much slower than what we had before on a physical machine well over a decade old.
I'm not at all familiar with FileMaker, but what can be done to speed up these exports? We're on a virtual machine with 8 cores, 32GB RAM, and 2 separate high-IO drives. During an export not a single resource appears to be constrained. According to the stats.log file, the cache hit rate is about 100% (which seems a bit odd to me, but I'm assuming that's actually a good thing?).
Another thing that is a lot slower than on our physical machine is the login phase: on the old machine you logged in and instantly got the list of all databases you have access to, whereas now it easily takes a minute. I'm not sure why, as again not a single resource appears to be constrained, and all of my FileMaker Server processes together are only using about 1GB of memory.
The virtual machine's settings are managed in Hyper-V, to which I have no access. The VM is allowed to grow its memory consumption up to 32GB dynamically, but it never assigns itself more than 8GB and for now keeps its usage under 80%. Should I try to force Hyper-V to assign the full 32GB and keep it available? Or is something more sinister going on?
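For reference, my understanding is that pinning the memory would look something like this on the Hyper-V host (a sketch only; the VM name is made up, and I believe the VM has to be shut down to switch off dynamic memory):

# Hypothetical VM name; requires the Hyper-V PowerShell module on the host
Stop-VM -Name "FMS-Prod"
Set-VMMemory -VMName "FMS-Prod" -DynamicMemoryEnabled $false -StartupBytes 32GB
Start-VM -Name "FMS-Prod"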