I'm trying to understand the precision of this command: as far as I know it reports times with millisecond precision, but it uses the getrusage() function, which returns values in microseconds. However, reading this paper, the real precision is only 10 ms, because getrusage samples based on ticks (= 100 Hz). The paper is really old (it mentions Linux 2.2.14 running on a Pentium 166 MHz with 96 MB of RAM).
Does time still use getrusage() and 100 Hz ticks, or is it more precise on modern systems?
The test machine is running Linux 2.6.32.
EDIT: this is a slightly modified version of muru's code (which should also compile on old versions of GCC): changing the value of the variable 'v' also changes the delay between measurements, in order to find the minimum granularity. A value around 500,000 should trigger a 1 ms change on a relatively recent CPU (first-generation i5/i7 @ ~2.5 GHz).
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

void dosomething()
{
    long v = 1000000;
    while (v > 0)
        v--;
}

int main()
{
    struct rusage r1, r2;
    long t1, t2, min = 0, max = 0;
    int i;

    printf("t1\tt2\tdiff\n");
    for (i = 0; i < 5; i++) {
        getrusage(RUSAGE_SELF, &r1);
        dosomething();
        getrusage(RUSAGE_SELF, &r2);
        t1 = r1.ru_stime.tv_usec + r1.ru_stime.tv_sec * 1000000 +
             r1.ru_utime.tv_usec + r1.ru_utime.tv_sec * 1000000;
        t2 = r2.ru_stime.tv_usec + r2.ru_stime.tv_sec * 1000000 +
             r2.ru_utime.tv_usec + r2.ru_utime.tv_sec * 1000000;
        printf("%ld\t%ld\t%ld\n", t1, t2, t2 - t1);
        if ((t2 - t1 < min) || (i == 0))
            min = t2 - t1;
        if ((t2 - t1 > max) || (i == 0))
            max = t2 - t1;
        dosomething();
    }
    printf("Min = %ldus Max = %ldus\n", min, max);
    return 0;
}
But the precision is tied to the Linux version: on Linux 3 and later the precision is on the order of microseconds, while on Linux 2.6.32 it is around 1 ms, possibly also depending on the specific distribution. I suppose the difference is related to the use of HRT (high-resolution timers) instead of ticks on recent Linux versions.
In any case, the maximum precision of the time reported is 1 ms, on all recent and not-so-recent machines.
Solution: The bash builtin time still uses getrusage(2). On an Ubuntu 14.04 system:

$ bash --version
GNU bash, version 4.3.11(1)-release (x86_64-pc-linux-gnu)
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software; you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
$ strace -o log bash -c 'time sleep 1'

real	0m1.018s
user	0m0.000s
sys	0m0.001s
$ tail log
getrusage(RUSAGE_SELF, {ru_utime={0, 0}, ru_stime={0, 3242}, ...}) = 0
getrusage(RUSAGE_CHILDREN, {ru_utime={0, 0}, ru_stime={0, 530}, ...}) = 0
write(2, "\n", 1) = 1
write(2, "real\t0m1.018s\n", 14) = 14
write(2, "user\t0m0.000s\n", 14) = 14
write(2, "sys\t0m0.001s\n", 13) = 13
rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
exit_group(0) = ?
+++ exited with 0 +++
As the strace output shows, it calls getrusage.
As for precision: the rusage struct that getrusage fills in contains timeval objects, and a timeval has microsecond precision. From the manpage of getrusage:
ru_utime
       This is the total amount of time spent executing in user mode,
       expressed in a timeval structure (seconds plus microseconds).

ru_stime
       This is the total amount of time spent executing in kernel mode,
       expressed in a timeval structure (seconds plus microseconds).
So I think it can do better than 10 ms. Take the following example file:
#include <sys/time.h>
#include <sys/resource.h>
#include <stdio.h>

int main()
{
    struct rusage now, then;

    getrusage(RUSAGE_SELF, &then);
    getrusage(RUSAGE_SELF, &now);
    printf("%ld %ld\n",
           then.ru_stime.tv_usec + then.ru_stime.tv_sec * 1000000 +
           then.ru_utime.tv_usec + then.ru_utime.tv_sec * 1000000,
           now.ru_stime.tv_usec + now.ru_stime.tv_sec * 1000000 +
           now.ru_utime.tv_usec + now.ru_utime.tv_sec * 1000000);
    return 0;
}
Now:

$ make test
cc     test.c   -o test
$ for ((i=0; i < 5; i++)); do ./test; done
447 448
356 356
348 348
347 347
349 350
The differences reported between successive calls to getrusage are 1 μs and 0 (the minimum). Since it does show a 1 μs gap, the tick must be at most 1 μs.
If it had a 10 ms tick, the differences would be zero, or at least 10000.