hyperthreading disabled, one NVMe device

Run fio on a single NVMe device. With AES-NI, dm-crypt saturates the device
without consuming much CPU time. QAT with a 512-byte sector size is totally
unusable (200-300 MiB/s). QAT with a 4 kB sector size has lower read
throughput than AES-NI and consumes more CPU time. QAT with a 4 kB sector
size saturates the device on writes, but it occasionally deadlocks; the
deadlock can be fixed by qat-fix.patch (see below).

fio --ioengine=psync --iodepth=1 --rw=randread --direct=1 --end_fsync=1 --bs=64k --numjobs=56 --time_based --runtime=10 --group_reporting --name=job --filename=/dev/mapper/cr
fio --ioengine=psync --iodepth=1 --rw=randwrite --direct=1 --end_fsync=1 --bs=64k --numjobs=56 --time_based --runtime=10 --group_reporting --name=job --filename=/dev/mapper/cr

raw device:
  READ:  bw=2893MiB/s  busy: 0.591141%  idle: 99.214848%  irq: 0.112351%
  WRITE: bw=1094MiB/s  busy: 0.436918%  idle: 99.473930%  irq: 0.044901%

dm-crypt, aes-ni, 4096:
  READ:  bw=2985MiB/s  busy: 2.033783%  idle: 97.618126%  irq: 0.158701%
  WRITE: bw=1092MiB/s  busy: 2.877080%  idle: 96.933613%  irq: 0.098376%

dm-crypt, qat, 4096:
  READ:  bw=1779MiB/s  busy: 3.345561%  idle: 93.220190%  irq: 1.828929%
  WRITE: !!!! QAT OCCASIONALLY DEADLOCKS !!!! (fix with qat-fix.patch)
  WRITE: bw=1093MiB/s  busy: 3.831038%  idle: 94.548982%  irq: 0.919794%

dm-crypt, aes-ni, 512:
  READ:  bw=2863MiB/s  busy: 2.653832%  idle: 96.996943%  irq: 0.150217%
  WRITE: bw=1097MiB/s  busy: 3.420346%  idle: 96.392972%  irq: 0.103595%

dm-crypt, qat, 512:
  READ:  bw=335MiB/s   busy: 3.235961%  idle: 93.666922%  irq: 1.726651%
  WRITE: bw=243MiB/s   busy: 3.033301%  idle: 93.827130%  irq: 1.769282%
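
For reference, the /dev/mapper/cr mapping used in the fio commands above can
be created with cryptsetup in plain mode. The report does not state the exact
setup, so the commands below are only a sketch: the device path
(/dev/nvme0n1), the cipher string and the throwaway key from /dev/urandom are
assumptions, and --sector-size needs cryptsetup 2.0 or newer. How the AES-NI
and QAT implementations were selected is also not stated here; normally the
kernel crypto API picks whichever registered implementation has the highest
priority, so the QAT runs presumably had the QAT driver loaded.

  # assumed setup, 4096-byte encryption sector size
  cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 \
      --sector-size 4096 --key-file /dev/urandom /dev/nvme0n1 cr

  # assumed setup, 512-byte encryption sector size
  cryptsetup open --type plain --cipher aes-xts-plain64 --key-size 512 \
      --sector-size 512 --key-file /dev/urandom /dev/nvme0n1 cr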