What to do when SELinux denials keep blocking things.
Things I commonly do (a command sketch follows the list):
・Check the SELinux state with getenforce
・Check /var/log/audit/audit.log
・Check contexts with ls -Z
・Check suggested fixes with audit2allow -i /var/log/audit/audit.log
・Enable or disable boolean rules with setsebool
・Install policy modules with semodule
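A minimal sketch of those steps for an httpd-related denial; the path and the boolean name are examples only, not taken from the case above:
getenforce
ls -Z /var/www/html
grep denied /var/log/audit/audit.log | tail
audit2allow -i /var/log/audit/audit.log
setsebool -P httpd_can_network_connect on
semodule -l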
How to create a policy module
Write the policy in a .te file:
# mymod.te
module mymod 1.0;
require {
    type httpd_t;
    type clamscan_exec_t;
    type clamd_var_lib_t;
    class file { getattr read open };
    class dir { read getattr open search };
}
#============= httpd_t ==============
allow httpd_t clamscan_exec_t:file { getattr read open };
allow httpd_t clamd_var_lib_t:dir { read getattr open search };
allow httpd_t clamd_var_lib_t:file { read getattr open };
Build a .pp file with the Makefile in /usr/share/selinux/devel.
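A sketch of the build step; on RHEL/CentOS the Makefile comes from the selinux-policy-devel package (an assumption worth checking on other distros), and checkmodule plus semodule_package is an alternative route:
make -f /usr/share/selinux/devel/Makefile mymod.pp
# alternative without the Makefile:
checkmodule -M -m -o mymod.mod mymod.te
semodule_package -o mymod.pp -m mymod.mod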
Install the module with semodule:
semodule -i mymod.pp
The easiest flow is probably: set SELinux to Permissive for a while, let audit.log collect a decent amount of information, then create a policy with audit2allow and apply it (a sketch below).
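A minimal sketch of that flow, assuming root; the grep pattern and the module name mymod are examples only (audit2allow -M emits both the .te and the .pp):
setenforce 0
# reproduce the denied operations here until audit.log has enough entries
grep httpd /var/log/audit/audit.log | audit2allow -M mymod
semodule -i mymod.pp
setenforce 1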
Thursday, June 12, 2014
Friday, June 6, 2014
Measuring IOPS on Linux with fio
I got the idea to measure IOPS, and some searching suggested that a tool called fio makes it easy.
So I gave it a try.
On my CloudN instance; the OS is CentOS 6.3.
fio can be installed with yum.
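On CentOS 6 the fio package most likely comes from EPEL (an assumption; it is not in the base repositories):
> yum install epel-release  # if EPEL is not already enabled
> yum install fio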
Create a job file and pass it to fio as an argument.
---write.fio---
[global]
ioengine=libaio
direct=1
invalidate=1
group_reporting
directory=/home
filename=test.bin
runtime=60
[Rand-Write-4k-qd32]
readwrite=randwrite
size=4G
bs=4k
iodepth=32
numjobs=1
Measuring write performance.
> fio write.fio
Rand-Write-4k-qd32: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.0.13
Starting 1 process
Rand-Write-4k-qd32: Laying out IO file(s) (1 file(s) / 4096MB)
Jobs: 1 (f=1): [w] [1.6% done] [0K/1036K/0K /s] [0 /259 /0 iops] [eta 01h:04m:23s]
Rand-Write-4k-qd32: (groupid=0, jobs=1): err= 0: pid=6399: Fri Jun 6 16:47:56 2014
write: io=65324KB, bw=1087.5KB/s, iops=271 , runt= 60069msec
slat (usec): min=9 , max=147169 , avg=83.39, stdev=2505.22
clat (usec): min=901 , max=319079 , avg=117614.32, stdev=60605.75
lat (usec): min=939 , max=319103 , avg=117698.36, stdev=60659.33
clat percentiles (msec):
| 1.00th=[ 16], 5.00th=[ 26], 10.00th=[ 37], 20.00th=[ 56],
| 30.00th=[ 75], 40.00th=[ 97], 50.00th=[ 120], 60.00th=[ 139],
| 70.00th=[ 157], 80.00th=[ 176], 90.00th=[ 198], 95.00th=[ 212],
| 99.00th=[ 251], 99.50th=[ 265], 99.90th=[ 293], 99.95th=[ 306],
| 99.99th=[ 318]
bw (KB/s) : min= 631, max= 1466, per=100.00%, avg=1090.50, stdev=189.22
lat (usec) : 1000=0.01%
lat (msec) : 2=0.01%, 4=0.05%, 10=0.06%, 20=2.65%, 50=14.48%
lat (msec) : 100=23.94%, 250=57.78%, 500=1.03%
cpu : usr=0.16%, sys=0.75%, ctx=6276, majf=0, minf=22
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=99.8%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=16331/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
WRITE: io=65324KB, aggrb=1087KB/s, minb=1087KB/s, maxb=1087KB/s, mint=60069msec, maxt=60069msec
Disk stats (read/write):
dm-0: ios=0/17652, merge=0/0, ticks=0/2025293, in_queue=2030659, util=99.92%, aggrios=0/16405, aggrmerge=0/1259, aggrticks=0/1913289, aggrin_queue=1913280, aggrutil=99.89%
vda: ios=0/16405, merge=0/1259, ticks=0/1913289, in_queue=1913280, util=99.89%
Write IOPS: 271.
Next, reads.
---read.fio---
[global]
ioengine=libaio
direct=1
invalidate=1
group_reporting
directory=/home
filename=test.bin
runtime=60
[Rand-Read-4k-qd32]
readwrite=randread
size=4G
bs=4k
iodepth=32
numjobs=1
> fio read.fio
Rand-Read-4k-qd32: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.0.13
Starting 1 process
Jobs: 1 (f=1): [r] [83.3% done] [773.6M/0K/0K /s] [198K/0 /0 iops] [eta 00m:02s]
Rand-Read-4k-qd32: (groupid=0, jobs=1): err= 0: pid=6416: Fri Jun 6 16:53:21 2014
read : io=4096.0MB, bw=425256KB/s, iops=106314 , runt= 9863msec
slat (usec): min=2 , max=441 , avg= 3.41, stdev= 3.71
clat (usec): min=0 , max=109809 , avg=296.16, stdev=1518.26
lat (usec): min=4 , max=109820 , avg=299.88, stdev=1520.24
clat percentiles (usec):
| 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 143],
| 30.00th=[ 147], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 157],
| 70.00th=[ 159], 80.00th=[ 167], 90.00th=[ 177], 95.00th=[ 181],
| 99.00th=[ 4640], 99.50th=[ 9280], 99.90th=[24448], 99.95th=[30592],
| 99.99th=[42240]
bw (KB/s) : min= 9940, max=840800, per=96.25%, avg=409291.58, stdev=392536.90
lat (usec) : 2=0.01%, 10=0.02%, 20=0.01%, 50=0.01%, 100=0.03%
lat (usec) : 250=98.17%, 500=0.19%, 750=0.02%, 1000=0.01%
lat (msec) : 2=0.04%, 4=0.38%, 10=0.67%, 20=0.28%, 50=0.17%
lat (msec) : 100=0.01%, 250=0.01%
cpu : usr=14.73%, sys=44.41%, ctx=10779, majf=0, minf=56
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0
Run status group 0 (all jobs):
READ: io=4096.0MB, aggrb=425256KB/s, minb=425256KB/s, maxb=425256KB/s, mint=9863msec, maxt=9863msec
Disk stats (read/write):
dm-0: ios=16331/10, merge=0/0, ticks=149442/415, in_queue=149857, util=48.35%, aggrios=16331/4, aggrmerge=0/7, aggrticks=149418/114, aggrin_queue=149521, aggrutil=47.25%
vda: ios=16331/4, merge=0/7, ticks=149418/114, in_queue=149521, util=47.25%
100,000...? Presumably most of those reads never hit the disk: the device stats show vda serviced only 16,331 reads out of the 1,048,576 issued, and direct=1 only bypasses the guest's page cache, so a hypervisor- or host-side cache seems the likely explanation.
Benchmarking CloudN's ComputeFlat with UnixBench
I happened to sign up for ComputeFlat on NTT Com's cloud service CloudN, so I took the opportunity to run UnixBench on it. The plan is 1 CPU / 2 GB RAM.
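For reference, a sketch of fetching and running UnixBench; the GitHub URL is an assumption (the project has changed hosting over the years), and ./Run with no arguments executes the full suite:
> git clone https://github.com/kdlucas/byte-unixbench.git
> cd byte-unixbench/UnixBench
> make
> ./Run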
Results below.
========================================================================
BYTE UNIX Benchmarks (Version 5.1.3)
System: suishin: GNU/Linux
OS: GNU/Linux -- 2.6.32-279.el6.x86_64 -- #1 SMP Fri Jun 22 12:19:21 UTC 2012
Machine: x86_64 (x86_64)
Language: en_US.utf8 (charmap="UTF-8", collate="UTF-8")
CPU 0: QEMU Virtual CPU version (cpu64-rhel6) (3990.4 bogomips)
x86-64, MMX, Physical Address Ext, SYSCALL/SYSRET
15:47:15 up 1 day, 21:12, 1 user, load average: 0.60, 0.21, 0.06; runlevel 3
------------------------------------------------------------------------
Benchmark Run: Fri Jun 06 2014 15:47:15 - 16:15:37
1 CPU in system; running 1 parallel copy of tests
Dhrystone 2 using register variables 28617114.2 lps (10.0 s, 7 samples)
Double-Precision Whetstone 3017.1 MWIPS (9.7 s, 7 samples)
Execl Throughput 3089.6 lps (29.9 s, 2 samples)
File Copy 1024 bufsize 2000 maxblocks 653774.6 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 192359.8 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1755114.1 KBps (30.0 s, 2 samples)
Pipe Throughput 1331340.7 lps (10.0 s, 7 samples)
Pipe-based Context Switching 235541.7 lps (10.0 s, 7 samples)
Process Creation 8824.3 lps (30.0 s, 2 samples)
Shell Scripts (1 concurrent) 4037.8 lpm (60.0 s, 2 samples)
Shell Scripts (8 concurrent) 563.4 lpm (60.1 s, 2 samples)
System Call Overhead 2082605.7 lps (10.0 s, 7 samples)
System Benchmarks Index Values BASELINE RESULT INDEX
Dhrystone 2 using register variables 116700.0 28617114.2 2452.2
Double-Precision Whetstone 55.0 3017.1 548.6
Execl Throughput 43.0 3089.6 718.5
File Copy 1024 bufsize 2000 maxblocks 3960.0 653774.6 1650.9
File Copy 256 bufsize 500 maxblocks 1655.0 192359.8 1162.3
File Copy 4096 bufsize 8000 maxblocks 5800.0 1755114.1 3026.1
Pipe Throughput 12440.0 1331340.7 1070.2
Pipe-based Context Switching 4000.0 235541.7 588.9
Process Creation 126.0 8824.3 700.3
Shell Scripts (1 concurrent) 42.4 4037.8 952.3
Shell Scripts (8 concurrent) 6.0 563.4 939.0
System Call Overhead 15000.0 2082605.7 1388.4
========
System Benchmarks Index Score 1098.1
A bit on the low side, maybe...
Still, compared with AWS the performance looks pretty good.