Generic SCSI target mid-level for Linux (SCST)
==============================================

SCST is designed to provide a unified, consistent interface between SCSI
target drivers and the Linux kernel and to simplify target driver
development as much as possible. A detailed description of SCST's
features and internals can be found in the "Generic SCSI Target Middle
Level for Linux" document on SCST's Internet page
http://scst.sourceforge.net.

SCST supports the following I/O modes:

 * Pass-through mode with a one-to-many relationship, i.e. multiple
   initiators can connect to the exported pass-through devices, for
   the following SCSI device types: disks (type 0), tapes (type 1),
   processors (type 3), CDROMs (type 5), MO disks (type 7), medium
   changers (type 8) and RAID controllers (type 0xC)

 * FILEIO mode, which allows using files on file systems or block
   devices as virtual, remotely available SCSI disks or CDROMs with
   the benefits of the Linux page cache

 * BLOCKIO mode, which performs direct block IO with a block device,
   bypassing the page cache for all operations. This mode works ideally
   with high-end storage HBAs and for applications that either do not
   need caching between application and disk or need the large block
   throughput

 * User space mode using the scst_user device handler, which allows
   implementing virtual SCSI devices for SCST entirely in user space

 * "Performance" device handlers, which, in a pseudo pass-through mode,
   provide a way for direct performance measurements without the
   overhead of actually transferring data from/to the underlying SCSI
   device

In addition, SCST supports advanced per-initiator access and device
visibility management, so different initiators can see different sets
of devices with different access permissions. See below for details.

This is a quite stable (but still beta) version.

To see your devices remotely, you need to add them to at least the
"Default" security group (see below how). By default, no local devices
are visible remotely. There must be a LUN 0 in each security group,
i.e. LUN numbering must not start from, e.g., 1.

It is highly recommended to use the scstadmin utility for configuring
devices and security groups.

If you experience problems during module load or operation, check your
kernel logs (or run the dmesg command for the most recent messages).

IMPORTANT: Without loading the appropriate device handler, the corresponding
=========  devices will be invisible to remote initiators, which could lead
           to holes in the LUN addressing, so automatic device scanning by
           the remote SCSI mid-level may not notice the devices. In that
           case you will have to add them manually via
           'echo "- - -" >/sys/class/scsi_host/hostX/scan',
           where X is the host number.

IMPORTANT: Running the target and an initiator on the same host isn't
=========  supported. This is a limitation of the Linux memory/cache
           manager, because in this case an OOM deadlock can occur: the
           system needs some memory -> it decides to clear some cache ->
           the cache needs to write to a target-exported device -> the
           initiator sends a request to the target -> the target needs
           memory -> deadlock.

IMPORTANT: In the current version simultaneous access to local SCSI devices
=========  via the standard high-level SCSI drivers (sd, st, sg, etc.) and
           SCST's target drivers is unsupported. This is especially
           important for commands issued via sg and st that change the
           state of devices and their parameters, because that could
           lead to data corruption. If any such command is executed, at
           least the related device handler(s) must be restarted. For
           block devices, READ/WRITE commands using the direct disk
           handler look to be safe.

Device specific drivers (device handlers) are plugins for SCST that help
SCST analyze incoming requests and determine parameters specific to
various types of devices. If an appropriate device handler for a SCSI
device type isn't loaded, SCST doesn't know how to handle devices of
that type, so they will be invisible to remote initiators (more
precisely, a "LUN not supported" sense code will be returned).

In addition to the device handlers for real devices, there are the
VDISK, user space and "performance" device handlers.

The VDISK device handler works over files on file systems and turns them
into virtual, remotely available SCSI disks or CDROMs. In addition, it
allows working directly over a block device, e.g. a local IDE or SCSI
disk or even a disk partition, where there is no file system overhead.
Using block devices, compared to sending SCSI commands directly to the
SCSI mid-level via scsi_do_req()/scsi_execute_async(), has the advantage
that data are transferred via the system cache, so it is possible to
fully benefit from the caching and read-ahead performed by Linux's VM
subsystem. The only disadvantage is that in FILEIO mode there is
superfluous data copying between the cache and SCST's buffers. This
issue is going to be addressed in the next release. Virtual CDROMs are
useful for remote installation. See below for details on how to set up
and use the VDISK device handler.

The SCST user space device handler provides an interface between SCST
and user space, which allows creating pure user space devices. The
simplest example of where one would want it is writing a VTL: with
scst_user it can be written purely in user space. Another example is
when the passed data need processing too sophisticated for kernel
space, like encryption or snapshotting.

116 "Performance" device handlers for disks, MO disks and tapes in their
117 exec() method skip (pretend to execute) all READ and WRITE operations
118 and thus provide a way for direct link performance measurements without
119 overhead of actual data transferring from/to underlying SCSI device.
NOTE: Since "perf" device handlers don't touch the command's data buffer
====  on READ operations, the buffer is returned to remote initiators as
      it was allocated, without even being zeroed. Thus, "perf" device
      handlers impose some security risk, so use them with caution.

There are the following compilation options, which can be changed using
your favorite kernel configuration Makefile target, e.g. "make xconfig":

 - CONFIG_SCST_DEBUG - if defined, turns on some debugging code,
   including some logging. Makes the driver considerably bigger and
   slower, producing a large amount of log data.

 - CONFIG_SCST_TRACING - if defined, turns on the ability to log events.
   Makes the driver considerably bigger and leads to some performance
   loss.

 - CONFIG_SCST_EXTRACHECKS - if defined, adds extra validity checks in
   various places of the code.

 - CONFIG_SCST_USE_EXPECTED_VALUES - if not defined (default), the
   initiator-supplied expected data transfer length and direction will
   be used only for verification purposes, to return an error or warn
   if one of them is invalid, while the values locally decoded from the
   SCSI command will be used instead. This is necessary for security
   reasons, because otherwise a faulty initiator could crash the target
   by supplying an invalid value in one of those parameters. This is
   especially important in pass-through mode. If
   CONFIG_SCST_USE_EXPECTED_VALUES is defined, the initiator-supplied
   expected data transfer length and direction will override the locally
   decoded values. This might be necessary if the internal SCST command
   translation table doesn't contain a SCSI command used in your
   environment. You can recognize that situation by messages like
   "Unknown opcode XX for YY. Should you update scst_scsi_op_table?" in
   your kernel log together with your initiator returning an error.
   Please also report such messages to the SCST mailing list
   scst-devel@lists.sourceforge.net. Note that not all SCSI transports
   support supplying the expected values.

 - CONFIG_SCST_DEBUG_TM - if defined, turns on task management (TM)
   functions debugging: on LUN 0 in the default access control group
   some of the commands will be delayed for about 60 sec., making the
   remote initiator send TM functions, e.g. ABORT TASK and TARGET RESET.
   Also define the CONFIG_SCST_TM_DBG_GO_OFFLINE symbol in the Makefile
   if you want the device to eventually become completely unresponsive;
   otherwise it will cycle through the ABORT and RESET code paths.
   Requires CONFIG_SCST_DEBUG to be turned on.

 - CONFIG_SCST_STRICT_SERIALIZING - if defined, makes SCST send all
   commands to the underlying SCSI device synchronously, one after
   another. This makes task management more reliable, at the cost of
   some performance penalty. This mostly matters for stateful SCSI
   devices like tapes, where the result of a command's execution depends
   on the device's settings defined by previous commands. Disk and RAID
   devices are stateless in most cases. The current SCSI core in Linux
   doesn't allow aborting all commands reliably if they were sent
   asynchronously to a stateful device. Turned off by default; turn it
   on if you use stateful device(s) and need as much error recovery
   reliability as possible. As a side effect, no kernel patching is
   necessary.

 - CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ - if defined,
   pass-through commands are allowed to be submitted to real SCSI
   devices via the SCSI middle layer using the scsi_execute_async()
   function from soft IRQ context (tasklets). This used to be the
   default, but currently the SCSI middle layer seems to expect only
   thread context on the IO submit path, so it is now disabled by
   default. Enabling it will decrease the number of context switches and
   improve performance. It is more or less safe; in the worst case, if
   in your configuration the SCSI middle layer really doesn't expect
   SIRQ context in the scsi_execute_async() function, you will get a
   warning message in the kernel log.

 - CONFIG_SCST_STRICT_SECURITY - if defined, makes SCST zero allocated
   data buffers. Undefining it (default) considerably improves
   performance and eases CPU load, but could create a security hole
   (information leakage), so enable it if you have strict security
   requirements.

 - CONFIG_SCST_ABORT_CONSIDER_FINISHED_TASKS_AS_NOT_EXISTING - if
   defined, when the TASK MANAGEMENT function ABORT TASK tries to abort
   a command that has already finished, the remote initiator which sent
   the ABORT TASK request will receive a TASK NOT EXIST (or ABORT
   FAILED) response for it. This is the more logical response, since the
   command has finished and the attempt to abort it therefore failed,
   but some initiators, particularly the VMware iSCSI initiator, treat a
   TASK NOT EXIST response as if the target had gone crazy and try to
   RESET it, then sometimes go crazy themselves. So, this option is
   disabled by default.

 - CONFIG_SCST_MEASURE_LATENCY - if defined, the average command
   processing latency is reported in the /proc/scsi_tgt/latency file.
   You can clear the already measured results by writing 0 to this file.
   Note that you need a non-preemptible kernel to get correct results.

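   For example, assuming the option is compiled in, reading and then
   resetting the collected statistics looks like this:

     # cat /proc/scsi_tgt/latency
     # echo 0 >/proc/scsi_tgt/latency
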
HIGHMEM kernel configurations are fully supported, but not recommended
for performance reasons, except for scst_user, where they are not
supported, because this module deals with user-supplied memory in a
zero-copy manner. If you need HIGHMEM, consider changing the VMSPLIT
option or using a 64-bit system configuration instead.

To change the VMSPLIT option (CONFIG_VMSPLIT to be precise) you should
set the following variables in "make menuconfig":

 - General setup->Configure standard kernel features (for small systems): ON

 - General setup->Prompt for development and/or incomplete code/drivers: ON

 - Processor type and features->High Memory Support: OFF

 - Processor type and features->Memory split: according to the amount of
   memory you have. If it is less than 800MB, you may not need to touch
   this option at all.

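For a rough illustration only (the exact symbol names depend on your
kernel version and architecture, so treat this as an assumption rather
than an exact recipe), a 32-bit x86 .config after these steps might
contain lines like:

  CONFIG_NOHIGHMEM=y
  # CONFIG_HIGHMEM4G is not set
  CONFIG_VMSPLIT_2G=y
  CONFIG_PAGE_OFFSET=0x80000000
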
The scst module supports the following parameters:

 - scst_threads - allows setting the number of SCST's threads. By
   default it equals the number of CPUs in the system.

 - scst_max_cmd_mem - sets the maximum amount of memory in MB allowed to
   be consumed by SCST commands for data buffers at any given time. By
   default it is approximately TotalMem/4.

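For example, a minimal sketch of loading the module with both parameters
set explicitly (the values 4 and 512 are arbitrary illustrations, not
recommendations):

  # modprobe scst scst_threads=4 scst_max_cmd_mem=512
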
245 SCST "/proc" commands
246 ---------------------
248 For communications with user space programs SCST provides proc-based
249 interface in "/proc/scsi_tgt" directory. It contains the following
252 - "help" file, which provides online help for SCST commands
254 - "scsi_tgt" file, which on read provides information of serving by SCST
255 devices and their dev handlers. On write it supports the following
258 * "assign H:C:I:L HANDLER_NAME" assigns dev handler "HANDLER_NAME"
259 on device with host:channel:id:lun
261 - "sessions" file, which lists currently connected initiators (open sessions)
263 - "sgv" file provides some statistic about with which block sizes
264 commands from remote initiators come and how effective sgv_pool in
265 serving those allocations from the cache, i.e. without memory
266 allocations requests to the kernel. "Size" - is the commands data
267 size upper rounded to power of 2, "Hit" - how many there are
268 allocations from the cache, "Total" - total number of allocations.
270 - "threads" file, which allows to read and set number of SCST's threads
272 - "version" file, which shows version of SCST
274 - "trace_level" file, which allows to read and set trace (logging) level
275 for SCST. See "help" file for list of trace levels. If you want to
276 enable logging options, which produce a lot of events, like "debug",
277 to not loose logged events you should also:
279 * Increase in .config of your kernel CONFIG_LOG_BUF_SHIFT variable
280 to much bigger value, then recompile it. For example, I use 25,
281 but to use it I needed to modify the maximum allowed value for
282 CONFIG_LOG_BUF_SHIFT in the corresponding Kconfig.
284 * Change in your /etc/syslog.conf or other config file of your favorite
285 logging program to store kernel logs in async manner. For example,
286 I added in my rsyslog.conf line "kern.info -/var/log/kernel"
287 and added "kern.none" in line for /var/log/messages, so I had:
288 "*.info;kern.none;mail.none;authpriv.none;cron.none /var/log/messages"
Each dev handler has its own subdirectory. Most dev handlers have only
two files in this subdirectory: "trace_level" and "type". The former is
similar to the main SCST "trace_level" file, the latter shows the SCSI
type number of this handler as well as a short text description.

For example, "echo "assign 1:0:1:0 dev_disk" >/proc/scsi_tgt/scsi_tgt"
will assign the device handler "dev_disk" to the real device sitting on
host 1, channel 0, ID 1, LUN 0.

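Similarly, based on the file descriptions above, a couple of further
illustrative accesses (the thread count 8 is an arbitrary example value,
not a recommendation):

  # cat /proc/scsi_tgt/sessions
  # echo 8 >/proc/scsi_tgt/threads
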
Access and devices visibility management (LUN masking)
------------------------------------------------------

Access and device visibility management allows an initiator or group of
initiators to see a different set of devices with different LUNs and
with the necessary access permissions.

SCST supports two modes of access control:

1. Target-oriented. In this mode you define, for each target, the
devices and their LUNs which are accessible to all initiators connected
to that target. This is the regular access control mode that people
usually mean when thinking about access control in general. For
instance, in IET this is the only supported mode. In this mode you
should create a security group named "Default_TARGET_NAME", where
"TARGET_NAME" is the name of the target, like
"Default_iqn.2007-05.com.example:storage.disk1.sys1.xyz" for target
"iqn.2007-05.com.example:storage.disk1.sys1.xyz". Then you should add to
it all the LUNs available from that target.

2. Initiator-oriented. In this mode you define which devices and LUNs
are accessible to each initiator. Here you should create a separate
security group for each set of one or more initiators which should
access the same set of devices with the same LUNs, then add to it the
available devices and the names of the allowed initiator(s).

Both modes can be used simultaneously. In this case the
initiator-oriented mode has higher priority than the target-oriented
one.

When a target driver registers itself with the SCST core, it tells the
SCST core its name. Then, when there is a new connection from a remote
initiator, the target driver registers this connection with the SCST
core and tells it the name of the remote initiator. The SCST core then
finds the corresponding devices for it using the following algorithm:

1. It searches through all defined groups trying to find a group
containing the initiator's name. If it succeeds, the found group is
used.

2. Otherwise, it searches through all groups trying to find a group
named "Default_TARGET_NAME". If it succeeds, the found group is used.

3. Otherwise, the group named "Default" is used. This group always
exists, but is empty by default.

In /proc/scsi_tgt each group is represented by a "groups/GROUP_NAME/"
subdirectory. In it there are the files "devices" and "names". The
"devices" file lists the devices and their LUNs in the group, the
"names" file lists the names of the initiators which are allowed to
access the devices in this group.

To configure access and device visibility management SCST supports
writing the following commands to files under /proc/scsi_tgt:

350 - "add_group GROUP" to /proc/scsi_tgt/scsi_tgt adds group "GROUP"
352 - "del_group GROUP" to /proc/scsi_tgt/scsi_tgt deletes group "GROUP"
354 - "add H:C:I:L lun [READ_ONLY]" to /proc/scsi_tgt/groups/GROUP/devices adds
355 device with host:channel:id:lun with LUN "lun" in group "GROUP". Optionally,
356 the device could be marked as read only.
358 - "del H:C:I:L" to /proc/scsi_tgt/groups/GROUP/devices deletes device with
359 host:channel:id:lun from group "GROUP"
361 - "add V_NAME lun [READ_ONLY]" to /proc/scsi_tgt/groups/GROUP/devices adds
362 device with virtual name "V_NAME" with LUN "lun" in group "GROUP".
363 Optionally, the device could be marked as read only.
365 - "del V_NAME" to /proc/scsi_tgt/groups/GROUP/devices deletes device with
366 virtual name "V_NAME" from group "GROUP"
368 - "clear" to /proc/scsi_tgt/groups/GROUP/devices clears the list of devices
371 - "add NAME" to /proc/scsi_tgt/groups/GROUP/names adds name "NAME" to group
374 - "del NAME" to /proc/scsi_tgt/groups/GROUP/names deletes name "NAME" from group
377 - "clear" to /proc/scsi_tgt/groups/GROUP/names clears the list of names
380 There must be LUN 0 in each security group, i.e. LUs numeration must not
385 - "echo "add 1:0:1:0 0" >/proc/scsi_tgt/groups/Default/devices" will
386 add real device sitting on host 1, channel 0, ID 1, LUN 0 to "Default"
389 - "echo "add disk1 1" >/proc/scsi_tgt/groups/Default/devices" will
390 add virtual VDISK device with name "disk1" to "Default" group
393 Consider you need to have an iSCSI target with name
394 "iqn.2007-05.com.example:storage.disk1.sys1.xyz" (you defined it in
395 iscsi-scst.conf), which should export virtual device "dev1" with LUN 0
396 and virtual device "dev2" with LUN 1, but initiator with name
397 "iqn.2007-05.com.example:storage.disk1.spec_ini.xyz" should see only
398 virtual device "dev2" with LUN 0. To achieve that you should do the
401 # echo "add_group Default_iqn.2007-05.com.example:storage.disk1.sys1.xyz" >/proc/scsi_tgt/scsi_tgt
402 # echo "add dev1 0" >/proc/scsi_tgt/groups/Default_iqn.2007-05.com.example:storage.disk1.sys1.xyz/devices
403 # echo "add dev2 1" >/proc/scsi_tgt/groups/Default_iqn.2007-05.com.example:storage.disk1.sys1.xyz/devices
405 # echo "add_group spec_ini" >/proc/scsi_tgt/scsi_tgt
406 # echo "add iqn.2007-05.com.example:storage.disk1.spec_ini.xyz" >/proc/scsi_tgt/groups/spec_ini/names
407 # echo "add dev2 0" >/proc/scsi_tgt/groups/spec_ini/devices
It is highly recommended to use the scstadmin utility instead of the
low-level interface described in this section.

All the access control must be fully configured BEFORE the corresponding
target driver is loaded! When you load a target driver or enable target
mode in it, as for the qla2x00t driver, it will immediately start
accepting new connections, hence creating new sessions, and those new
sessions will be assigned to security groups according to the
*currently* configured access control settings. For instance, to the
"Default" group, instead of "HOST004" as you need, because "HOST004"
doesn't exist yet. So, one must configure all the security groups before
new connections from the initiators are created, i.e. before the target
drivers are loaded.

Access control can be altered after the target driver is loaded, as long
as the corresponding session doesn't yet exist. Even if the session
already exists, changes are still possible, but they won't be reflected
on the initiator side.

So, the safest choice is to configure all the access control before any
target driver is loaded and afterwards only add new devices to new
groups for new initiators, or add new devices to old groups, but not
alter existing groups.

After loading, the VDISK device handler creates the subdirectories
"vdisk" and "vcdrom" in "/proc/scsi_tgt/". They have a similar layout:

 - "trace_level" and "type" files, as described for the other dev
   handlers

 - "help" file, which provides online help for the VDISK commands

 - "vdisk"/"vcdrom" files, which on read provide information about the
   currently open device files. On write they support the following
   commands:

    * "open NAME [PATH] [BLOCK_SIZE] [FLAGS]" - opens file "PATH" as
      device "NAME" with a block size of "BLOCK_SIZE" bytes and flags
      "FLAGS". "PATH" may be empty only for a VDISK CDROM. "BLOCK_SIZE"
      and "FLAGS" are valid only for a disk VDISK. The block size must
      be a power of 2 and >= 512 bytes. The default is 512. Possible
      flags:

       - WRITE_THROUGH - write-back caching disabled. Note that this
         option makes sense only if you also *manually* disable the
         write-back cache in *all* your backstorage devices and make
         sure it's actually disabled, since many devices are known to
         lie about this mode to get better benchmark results.

       - READ_ONLY - read only

       - O_DIRECT - both read and write caching disabled. This mode
         isn't currently fully implemented; you should use the user
         space fileio_tgt program in O_DIRECT mode instead (see below).

       - NULLIO - in this mode no real IO will be done, but success will
         be returned. Intended to be used for performance measurements
         in the same way as the "*_perf" handlers.

       - NV_CACHE - enables "non-volatile cache" mode. In this mode it
         is assumed that the target has a good UPS with the ability to
         cleanly shut down the target in case of a power failure, and
         that it is free of software/hardware bugs, i.e. all data from
         the target's cache are guaranteed to reach the media sooner or
         later. Hence all data synchronization with media operations,
         like SYNCHRONIZE_CACHE, are ignored in order to gain more
         performance. Also in this mode the target reports to the
         initiators that the corresponding device has a write-through
         cache, to disable all write-back cache workarounds used by
         initiators. Use with extreme caution, since in this mode, after
         a crash of the target, journaled file systems don't guarantee
         consistency after journal recovery, therefore a manual fsck
         MUST be run. Note that since the journal barrier protection
         (see the "IMPORTANT" note below) is usually turned off,
         enabling NV_CACHE could change nothing from the data protection
         point of view, since no data synchronization with media
         operations will come from the initiator. This option overrides
         WRITE_THROUGH.

       - BLOCKIO - enables block mode, which will perform direct block
         IO with a block device, bypassing the page cache for all
         operations. This mode works ideally with high-end storage HBAs
         and for applications that either do not need caching between
         application and disk or need the large block throughput. See
         also below.

       - REMOVABLE - with this flag set the device is reported to remote
         initiators as removable.

    * "close NAME" - closes device "NAME".

    * "change NAME [PATH]" - changes the virtual CD in the VDISK CDROM.

By default, if neither the BLOCKIO nor the NULLIO option is supplied,
FILEIO mode is used.

For example, "echo "open disk1 /vdisks/disk1" >/proc/scsi_tgt/vdisk/vdisk"
will open the file /vdisks/disk1 as a virtual FILEIO disk named "disk1".

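A few further illustrative invocations following the syntax described
above (the paths /dev/sdb and /images/image.iso are just placeholders
for your own backstorage and ISO image):

# echo "open disk2 /dev/sdb 4096 BLOCKIO" >/proc/scsi_tgt/vdisk/vdisk
# echo "open cdrom" >/proc/scsi_tgt/vcdrom/vcdrom
# echo "change cdrom /images/image.iso" >/proc/scsi_tgt/vcdrom/vcdrom

The first command opens the block device /dev/sdb as BLOCKIO disk
"disk2" with a 4096-byte block size, the second one creates an empty
virtual CDROM named "cdrom", and the third one inserts /images/image.iso
into it.
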
CAUTION: If you partitioned/formatted your device with block size X,
======== *NEVER* try to export and then mount it (even accidentally)
         with another block size. Otherwise you can *instantly* damage
         it pretty badly, as well as all your data on it. Messages on
         the initiator like "attempt to access beyond end of device" are
         a sign of such damage.

         Moreover, if you want to compare how well different block sizes
         work for you, then EVERY TIME AFTER CHANGING THE BLOCK SIZE you
         **MUST** **COMPLETELY** **WIPE OFF** ALL THE DATA FROM THE
         DEVICE. In other words, THE **WHOLE** DEVICE **MUST** CONTAIN
         ONLY **ZEROS** AFTER YOU SWITCH TO THE NEW BLOCK SIZE. Switching
         block sizes isn't like switching between FILEIO and BLOCKIO;
         after changing the block size all data previously written with
         another block size MUST BE ERASED. Otherwise you will see a
         full set of very weird behaviors, because the block addressing
         will have changed, but in most cases the initiators will have
         no way to detect that the old addresses written on the device
         in, e.g., the partition table, no longer refer to what they
         were intended to refer to.

IMPORTANT: By default, for performance reasons, VDISK FILEIO devices use
=========  a write-back caching policy. This is generally safe from the
           point of view of the consistency of journaled file systems
           laid over them, but your unsaved cached data will be lost in
           case of a power/hardware/software failure, so you must supply
           your target server with some kind of UPS or disable write-back
           caching using the WRITE_THROUGH flag. You should also note
           that file system journaling over write-back-cached devices
           works reliably *ONLY* if the order of journal writes is
           guaranteed or some kind of data protection barrier is used
           (i.e. after writing the journal data some kind of
           synchronization with media operation is performed);
           otherwise, because of possible reordering in the cache, even
           after a successful journal rollback you very much risk losing
           your data on the FS. Currently, the Linux IO subsystem
           guarantees the order of write operations only via data
           protection barriers. Some information about this from the XFS
           point of view can be found at
           http://oss.sgi.com/projects/xfs/faq.html#wcache. On Linux
           initiators, for the EXT3 and ReiserFS file systems the
           barrier protection can be turned on using the "barrier=1" and
           "barrier=flush" mount options respectively. Note that it is
           usually turned off by default, and the status of barrier
           usage isn't reported anywhere in the system logs, nor is
           there a (known) way to query it on a mounted file system.
           Windows and, AFAIK, other UNIXes don't need any special
           explicit options and perform the necessary barrier actions on
           write-back caching devices by default. Also note that on some
           real-life workloads write-through caching might perform
           better than write-back caching with the barrier protection
           turned on.
           You should also realize that Linux doesn't guarantee that
           after sync()/fsync() all written data have really hit
           permanent storage; they can still be in the cache of your
           backstorage device and be lost on a power failure. Thus, even
           with write-through cache mode you still need a good UPS to
           protect yourself from data loss (note: data loss, not file
           system integrity corruption).

IMPORTANT: Some disk and partition table management utilities don't
=========  support block sizes >512 bytes, so make sure that your
           favorite one supports them. Currently cfdisk is known to work
           only with 512-byte blocks, while other utilities, like fdisk
           on Linux or the standard disk manager on Windows, are proven
           to work well with non-512-byte blocks. Note that if you
           export a disk file or device with a block size different from
           the one with which it was already partitioned, you could get
           various weird things like utilities hanging or other
           unexpected behavior. Hence, to be sure, zero the exported
           file or device before the first access to it from the remote
           initiator with another block size. On a Windows initiator
           make sure you "Set Signature" in the disk manager on the
           drive imported from the target before doing any other
           partitioning on it. After you have successfully mounted a
           file system over a non-512-byte block size device, the block
           size stops mattering; any program will work with files on
           such a file system.

The BLOCKIO VDISK mode works best for the following types of scenarios:

1) Data that are not aligned to 4K sector boundaries and <4K block sizes
are used, which is normally found in virtualization environments where
operating systems start partitions on odd sectors (Windows and its
partitions starting at sector 63).

2) Large block data transfers normally found in database loads/dumps.

3) Advanced relational database systems that perform their own caching
and prefer or demand direct IO access and that, because of the nature of
their data access, can actually see worse performance with
indiscriminate caching.

4) Multiple layers of targets, where the secondary and higher layers
need a consistent view of the primary targets in order to preserve data
integrity, which a page-cache-backed IO type might not provide.

BLOCKIO also has the advantage over FILEIO that it doesn't copy data
between the system cache and the commands' data buffers, so it saves a
considerable amount of CPU power and memory bandwidth.

IMPORTANT: Since data in BLOCKIO and FILEIO modes are not consistent
=========  between them, if you try to use a device in both those modes
           simultaneously, you will almost instantly corrupt your data
           on that device.

In pass-through mode (i.e. using the pass-through device handlers
scst_disk, scst_tape, etc.), SCSI commands coming from remote initiators
are passed to the local SCSI hardware on the target as is, without any
modifications. Like any other hardware, the local SCSI hardware cannot
handle commands whose amount of data and/or scatter-gather segment count
exceed certain values. Therefore, when using pass-through mode you
should make sure that the maximum number of segments and the maximum
amount of transferred data per SCSI command configured for the devices
on the initiators are not bigger than the corresponding values of the
corresponding SCSI devices on the target. Otherwise you will see
symptoms like small transfers working well while large ones stall, and
messages like "Unable to complete command due to SG IO count limitation"
being printed in the kernel logs.

You can't control the scatter-gather segment limit from user space, but
for block devices it is usually sufficient to set
/sys/block/DEVICE_NAME/queue/max_sectors_kb on the initiators to the
same or a lower value than /sys/block/DEVICE_NAME/queue/max_hw_sectors_kb
of the corresponding devices on the target.

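For instance, assuming the backstorage device is sdb on the target, it
is imported as sdc on the initiator (both placeholder names), and the
value read on the target is 128:

  target# cat /sys/block/sdb/queue/max_hw_sectors_kb
  128
  initiator# echo 128 >/sys/block/sdc/queue/max_sectors_kb
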
For non-block devices SCSI commands are usually generated directly by
applications, so if you experience stalls on large transfers, you should
check the documentation of your application for how to limit the
transfer size.

User space mode using scst_user dev handler
-------------------------------------------

The user space program fileio_tgt uses the interface of the scst_user
dev handler and shows how it works in various modes. Fileio_tgt provides
mostly the same functionality as the scst_vdisk handler, with the most
noticeable difference that it supports O_DIRECT mode. O_DIRECT mode is
basically the same as BLOCKIO, but it also supports files, so for some
loads it could be significantly faster than regular FILEIO access.
Everything said about BLOCKIO above applies to O_DIRECT as well. See
fileio_tgt's README file for more details.

Before doing any performance measurements note that:

I. Performance results depend very much on your type of load, so it is
crucial that you choose the access mode (FILEIO, BLOCKIO, O_DIRECT,
pass-through) which suits your needs best.

II. In order to get the maximum performance you should:

1. For SCST:

 - Disable in the Makefile CONFIG_SCST_STRICT_SERIALIZING,
   CONFIG_SCST_EXTRACHECKS, CONFIG_SCST_TRACING, CONFIG_SCST_DEBUG* and
   CONFIG_SCST_STRICT_SECURITY.

 - For pass-through devices enable
   CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ.

2. For target drivers:

 - Disable in the Makefiles CONFIG_SCST_EXTRACHECKS, CONFIG_SCST_TRACING
   and CONFIG_SCST_DEBUG*.

3. For device handlers, including VDISK:

 - Disable in the Makefile CONFIG_SCST_TRACING and CONFIG_SCST_DEBUG.

 - If your initiator(s) use dedicated virtual SCSI devices exported from
   the target and have an amount of memory greater than or equal to the
   target's, it is recommended to use the O_DIRECT option (currently
   available only with the fileio_tgt user space program) or BLOCKIO.
   With them you could get up to a 100% increase in throughput.

IMPORTANT: Some of the above compilation options are enabled by default,
=========  i.e. SCST is currently optimized for development and bug
           hunting rather than for performance.

If you use an SCST version taken directly from the SVN repository, you
can set the above options, except
CONFIG_SCST_ALLOW_PASSTHROUGH_IO_SUBMIT_IN_SIRQ, using the debug2perf
Makefile target.

4. For other target and initiator software parts:

 - Don't enable debug/hacking features in the kernel, i.e. use them as
   they are by default.

 - The default kernel read-ahead and queuing settings are optimized for
   locally attached disks, so they are not optimal for remotely attached
   ones (the SCSI target case), which sometimes could lead to
   unexpectedly low throughput. You should increase the read-ahead size
   to at least 512KB or even more on all initiators and on the target.

   You should also limit on all initiators the maximum number of sectors
   per SCSI command. To do it on Linux initiators, run:

     echo "64" >/sys/block/sdX/queue/max_sectors_kb

   where instead of X you specify the letter of your device imported
   from the target, like 'a' for the sda device.

   To increase the read-ahead size on Linux, run:

     blockdev --setra N /dev/sdX

   where N is the read-ahead size in 512-byte sectors and X is the
   device letter, like above.

   Note: you need to set the read-ahead setting for device sdX again
   after you change the maximum number of sectors per SCSI command for
   that device.

 - You may need to increase the number of requests that the OS on the
   initiator sends to the target device. To do it on Linux initiators,
   run:

     echo "64" >/sys/block/sdX/queue/nr_requests

   where X is a device letter, like above.

   You may also experiment with other parameters in the /sys/block/sdX
   directory; they also affect performance. If you find the best values,
   please share them with us.

 - On the target use the CFQ IO scheduler. In most cases it has a
   performance advantage over the other IO schedulers, sometimes a huge
   one (2+ times aggregate throughput increase). See the example command
   at the end of this list.

 - It is recommended to turn kernel preemption off, i.e. set the kernel
   preemption model to "No Forced Preemption (Server)".

 - XFS looks like the best file system on the target to store device
   files, because it allows considerably better linear write throughput
   than ext3.

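   To illustrate the IO scheduler recommendation above (sdX is a
   placeholder for a backstorage device on the target; CFQ must be
   available in your kernel):

     cat /sys/block/sdX/queue/scheduler
     echo cfq >/sys/block/sdX/queue/scheduler
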
5. For hardware on the target:

 - Make sure that your target hardware (e.g. the target FC or network
   card) and the underlying IO hardware (e.g. the IO card, like SATA,
   SCSI or RAID, to which your disks are connected) don't share the same
   PCI bus. You can check it using the lspci utility; see the example
   below. They have to work in parallel, so it is better if they don't
   compete for the bus. The problem is not only the bandwidth they have
   to share, but also the interaction between the cards during that
   competition. This is very important, because in some cases, if the
   target and backend storage controllers share the same PCI bus, it
   could lead to 5-10 times less performance than expected. Moreover,
   some motherboards (by Supermicro, particularly) have serious
   stability issues if there are several high speed devices on the same
   bus working in parallel. If you have no choice but PCI bus sharing,
   set the PCI latency in the BIOS as low as possible.

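   For example, the following command prints the PCI device tree, which
   makes it easy to see whether two cards sit on the same bus:

     lspci -tv
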
6. If you use the VDISK IO module in FILEIO mode, the NV_CACHE option
will provide the best performance. But when using it, make sure you have
a good UPS with the ability to shut down the target on power failure.

IMPORTANT: If you use some versions of Windows (at least W2K) on the
=========  initiator, you can't get good write performance for VDISK
           FILEIO devices with the default 512-byte block size. You
           could get about 10% of the expected performance. This is
           because of the partition alignment, which is (simplifying)
           incompatible with how the Linux page cache works, so for each
           write the corresponding block must be read first. Use a
           4096-byte block size for VDISK devices and you will get the
           expected write performance. Actually, any OS on the
           initiators, not only Windows, will benefit from a block size
           of max(PAGE_SIZE, BLOCK_SIZE_ON_UNDERLYING_FS), where
           PAGE_SIZE is the page size and BLOCK_SIZE_ON_UNDERLYING_FS is
           the block size of the underlying FS on which the device file
           is located, or 0 if a device node is used. Both values are
           taken from the target. See also the important notes about
           setting block sizes >512 bytes for VDISK FILEIO devices
           above.

What if target's backstorage is too slow
----------------------------------------

If under high load you experience I/O stalls or see abort or reset
messages in the kernel log on the target, then your backstorage is too
slow compared with your target link speed and the number of
simultaneously queued commands. On some seek-intensive workloads even
fast disks or RAIDs, which are able to serve a continuous data stream at
500+ MB/s, can be as slow as 0.3 MB/s. Another possible cause can be
MD/LVM/RAID on your target, as in http://lkml.org/lkml/2008/2/27/96
(check the whole thread as well).

Thus, in such situations processing of one or more commands simply takes
too long, so the initiator decides that they are stuck on the target and
tries to recover. In particular, it is known that the default number of
simultaneously queued commands (48) is sometimes too high if you do
intensive writes from VMware to a target disk which uses LVM in snapshot
mode. In this case a value like 16, or even 8-10, depending on your
backstorage speed, could be more appropriate.

Unfortunately, SCST currently lacks dynamic I/O flow control, where the
queue depth on the target would be dynamically decreased/increased based
on how slow/fast the backstorage is compared to the target link. So,
there are only 5 possible actions you can take to work around or fix
this issue:

1. Ignore the incoming task management (TM) commands. This is fine if
there are not too many of them, so the average performance isn't hurt
and the corresponding device isn't put offline, i.e. if the backstorage
isn't too slow.

2. Decrease /sys/block/sdX/device/queue_depth on the initiator, if it's
Linux (see below how), and/or the SCST_MAX_TGT_DEV_COMMANDS constant in
the scst_priv.h file until you stop seeing incoming TM commands. The
iSCSI-SCST driver also has its own iSCSI-specific parameter for that.

3. Try to avoid such seek-intensive workloads.

4. Increase the speed of the target's backstorage.

5. Implement dynamic I/O flow control in SCST.

To decrease the device queue depth on Linux initiators run:

# echo Y >/sys/block/sdX/device/queue_depth

where Y is the new number of simultaneously queued commands and X is the
letter of your imported device, like 'a' for the sda device. There are
no special limitations on the Y value; it can be anything from 1 to the
possible maximum (usually 32), so start by dividing the current value by
2, i.e. set 16 if /sys/block/sdX/device/queue_depth contains 32.

Note that logged messages about QUEUE_FULL status are quite different in
nature. This is normal operation, just SCSI flow control in action.
Simply don't enable the "mgmt_minor" logging level, or, alternatively,
if you are confident in the worst-case performance of your back-end
storage, you can increase SCST_MAX_TGT_DEV_COMMANDS in scst_priv.h to
64. Usually initiators don't try to push more commands to the target.

Thanks to:

 * Mark Buechler <mark.buechler@gmail.com> for a lot of useful
   suggestions, bug reports and help in debugging.

 * Ming Zhang <mingz@ele.uri.edu> for fixes and comments.

 * Nathaniel Clark <nate@misrule.us> for fixes and comments.

 * Calvin Morrow <calvin.morrow@comcast.net> for testing and useful
   suggestions.

 * Hu Gang <hugang@soulinfo.com> for the original version of the
   LSI target driver.

 * Erik Habbinga <erikhabbinga@inphase-tech.com> for fixes and support
   of the LSI target driver.

 * Ross S. W. Walker <rswwalker@hotmail.com> for the original block IO
   code and Vu Pham <huongvp@yahoo.com> who updated it for the VDISK dev
   handler.

 * Michael G. Byrnes <michael.byrnes@hp.com> for fixes.

 * Alessandro Premoli <a.premoli@andxor.it> for fixes.

 * Nathan Bullock <nbullock@yottayotta.com> for fixes.

 * Terry Greeniaus <tgreeniaus@yottayotta.com> for fixes.

 * Krzysztof Blaszkowski <kb@sysmikro.com.pl> for many fixes and bug
   reports.

 * Jianxi Chen <pacers@users.sourceforge.net> for fixing problem with

 * Bart Van Assche <bart.vanassche@gmail.com> for a lot of help

Vladislav Bolkhovitin <vst@vlnb.net>, http://scst.sourceforge.net