Exporting heap memory from a coredump


When an application crashes on a system, it usually produces a core file. Knowing how to use that core file to locate the problem and to carry out the corresponding analysis and debugging is very important.

What is a core dump?

"Core" here means memory, and "dump" means to throw it out, to pile it up. When developing and using Unix programs, a program sometimes goes down inexplicably without printing anything (sometimes it prints "core dumped"). In that case, check whether a file named core.<PID> has appeared: that file is the memory contents the operating system threw out when the program went down, and it can serve as a reference when debugging the program.

A core dump happens when an exception occurs while a program is running and the program exits abnormally: the operating system stores the program's memory state at that moment into a core file.

A core dump is only generated when both the system-wide settings and the process settings allow it. When the exception occurs, the kernel dumps an image of the process; this has no direct relationship to GCC or GDB. Compiling with GCC's -g option merely adds a symbol table so that GDB can analyze the dump. A backtrace usually covers only a single thread, and the information a core dump gives back is not completely accurate either: it is just an image of the program right before it died (mainly the call stack and global variables). If the program has run completely off the rails, its reference value is limited.

Why is no core file generated?

Sometimes the program goes down but no core file appears. Whether a core file is generated depends on the environment settings of your current system; adjust them with the command below, then run the program again and a core file will be generated.

ulimit -c unlimited (set the coredump file size limit to unlimited)

You can check the system limits with the ulimit -a command; if it shows core file size (blocks, -c) 0, no core file will be produced.
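
For example, a typical sequence for enabling core files in the current shell looks like this (the output shown is illustrative, and the exact column layout of ulimit -a varies between shells; note that ulimit -c only affects the current shell and the processes started from it):

$ ulimit -c
0
$ ulimit -c unlimited
$ ulimit -a | grep "core file size"
core file size          (blocks, -c) unlimited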

The core file is usually generated in the same directory the program was run from, and the file name is usually core.<PID>.

Once you have the core file, you can use gdb to investigate: the first argument is the name of the application executable, and the second argument is the core file.

For example: gdb [...]xmsd [...]/xmsd_PID1065_SIG11.core

Then type bt or where to find the location where the error occurred and the corresponding stack information. This tells you the chain of function calls at the time of the error; you can then use up or down to move to the previous or next frame and look at its details. That gives you a rough localization of the problem, after which you can read the source code and analyze it. A typical session is sketched below.
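
As a quick reference, a post-mortem session might look like this (the program and core file names here are only illustrative):

$ gdb ./myapp core.1065
(gdb) bt            # or "where": show the call stack at the crash
(gdb) up            # move one frame towards the caller
(gdb) down          # move one frame back towards the crash
(gdb) list          # show the source around the current frame
(gdb) info locals   # inspect local variables in the current frame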

Example:

$vi foo.c

Edit it as follows:

#include <stdio.h>
static void sub(void);

int main(void)
{
    sub();
    return 0;
}

static void sub(void)
{
    int *p = NULL;
    printf("%d", *p); /* dereference a null pointer, expect a core dump */
}

$more foo.c    // view the code

$gcc -Wall -g foo.c

$./a.out

Segmentation fault

When core file size (blocks, -c) is 0, no core file is generated, but since we have already set it to unlimited, listing the directory with ls now shows:

a.out core foo.c

Then use GDB to analyze it:

$gdb --core=core

GNU gdb (GDB) 7.1-ubuntu

Copyright (C) 2010 Free Software Foundation, Inc.

License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>

This is free software: you are free to change and redistribute it.

There is NO WARRANTY, to the extent permitted by law. Type "show copying"

and "show warranty" for details.

This GDB was configured as "x86_64-linux-gnu".

For bug reporting instructions, please see:

<http://www.gnu.org/software/gdb/bugs/>.

[New Thread 14364]

Core was generated by `./a.out'.

Program terminated with signal 11, Segmentation fault.

#0 0x0000000000400548 in ?? ()

(gdb) bt

#0 0x0000000000400548 in ?? ()

#1 0x0000000000000000 in ?? ()

(gdb) file ./a.out

Reading symbols from /local/test_gdb/a.out...done.

(gdb) bt

#0 0x0000000000400548 in sub () at foo.c:13

#1 0x000000000040052d in main () at foo.c:6

(gdb) l

1   #include <stdio.h>
2   static void sub(void);
3
4   int main(void)
5   {
6       sub();
7       return 0;
8   }
9
10  static void sub(void)

(gdb) l

11  {
12      int *p = NULL;
13      printf("%d", *p); /* dereference a null pointer, expect a core dump */
14  }

From the above you can see exactly where the problem lies.
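
Since the topic here is exporting heap memory from a coredump, note that once the core is loaded you can also write raw memory ranges out of GDB into a file. A minimal sketch (the address range below is purely illustrative; use info files to list the memory segments actually contained in the core and pick the range you need):

(gdb) info files
(gdb) dump binary memory heap.bin 0x00602000 0x00623000
(gdb) shell ls -l heap.bin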


Further analysis showed that the system's default core file path had been set to /var/logs, but the /var/logs directory is not part of the stock installation (the system only ships with /var/log), so when a core dump occurred no core file could actually be written. So how do you query and change the system's default core dump path? As follows.

1. Query the core dump file path:
Method 1: # cat /proc/sys/kernel/core_pattern
Method 2: # /sbin/sysctl kernel.core_pattern

2. Change the core dump file path:
Method 1 (temporary): write to /proc/sys/kernel/core_pattern. The /proc filesystem is regenerated on every boot, so this change only lasts until the next reboot.
Example: echo '/var/log/%e.core.%p' > /proc/sys/kernel/core_pattern
Method 2: use the sysctl -w name=value command.
Example: /sbin/sysctl -w kernel.core_pattern=/var/log/%e.core.%p
(Note that sysctl -w also only lasts until the next reboot; to make the setting permanent, add it to /etc/sysctl.conf, as sketched below.)

To record the system state at the time of the core dump in more detail, the core file name can be enriched with format specifiers such as:
%% a single % character
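
The list of specifiers is cut off above; for reference, the commonly used ones documented in core(5) are:

%p  PID of the dumped process
%u  real UID of the dumped process
%g  real GID of the dumped process
%s  number of the signal that caused the dump
%t  time of the dump (seconds since the Epoch)
%h  hostname
%e  executable filename

A sketch of making the setting survive a reboot (the path and pattern are just examples):

# echo 'kernel.core_pattern=/var/log/%e.core.%p' >> /etc/sysctl.conf
# /sbin/sysctl -p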

Check for the message "(core dumped)" and for the generation of the 'core' file.

If you do not see the message or the core file on crash,

please run the following command again, and also check the current working

directory.

This command will open a gdb session at the crash point. The coredump file contains the process's state at the crash point, including its memory, its registers, the type of signal raised on the crash (mostly SIGSEGV), and so on. You cannot execute anything (e.g., using r, ni, si) because the execution has already terminated, but you can still inspect the memory.
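
For example, once the program and its core are loaded, a few commands you will typically use are (this walkthrough uses a 32-bit binary, hence $eip/$esp; on 64-bit use $rip/$rsp):

(gdb) info registers      # register values at the crash point
(gdb) bt                  # call stack at the crash
(gdb) x/i $eip            # the instruction that faulted
(gdb) x/32wx $esp         # raw words of stack memory at the stack pointer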

Core was generated by `./frame-pointer-32'.

Program terminated with signal SIGSEGV, Segmentation fault.

This output means that the coredump is generated by './frame-pointer-32',

and the program crashed at 0x8048635 with the signal SIGSEGV.

You can see that the program uses the stack address around 0xffffd5c0,

and the frame pointer (%ebp) has been changed to 0x41414141.

To check the cause of the crash, let's look at the current instruction. x/i $eip means "examine one instruction" at the address held in the instruction pointer (the eip).
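
A minimal check looks like this (the exact output formatting depends on your GDB version; the address and instruction follow the values quoted above):

(gdb) x/i $eip
=> 0x8048635:   leave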

Because the value of %ebp is 0x41414141, which is an invalid address, SIGSEGV was raised while running the leave instruction: leave is equivalent to "mov %ebp, %esp" followed by "pop %ebp", so it first sets %esp to 0x41414141 and then pops %ebp from the stack, dereferencing 0x41414141.

You already know that this challenge is about changing the frame pointer (%ebp)

to control the execution (return address) at the end.

The important part of launching such an attack is knowing where the start of your input buffer is on the stack.

Some of you are running into the problem that your attack works on the sample binary but does not work on the challenge binary. This is because the stack layout can differ between program executions, so the starting address of your buffer in the challenge binary can be different from the one in the sample binary.

To get the exact address for the challenge binary, you can obtain a 'corefile' of the challenge binary by running it and triggering the crash, because the corefile contains the exact memory contents at the crash point. In other words, the address of your buffer that you find in the corefile is the same address that was used during that execution!

Let's check how to get that in the corefile.

You can start from the current stack pointer (%esp); however, there are two function calls involved (non_main_func() -> input_func()). While your input resides in the local stack frame of input_func(), the crash point is back in non_main_func() (the frame-pointer attack returns twice, and a crash caused by an invalid %ebp requires two returns to generate the SIGSEGV).

We can presume that the stack pointer has moved up a little, because the leave/ret sequence has already rolled back the local stack frame of input_func().

So let's examine the stack values starting from somewhere below the current stack pointer (%esp):

x/100x $esp - 0x200

This command displays 100 4-byte words starting from the address %esp - 0x200 (the location 0x200 below %esp; you can use + instead to look upwards).

Although we injected an input of "A"*200, which should appear as many 0x41414141 words, I cannot see them at this point on the stack. Let's move upwards (press Enter again to repeat the examine command). Ah, now I can see several 0x41414141 values. It seems the buffer starts at 0xffffd560 (for the samples/frame-pointer-32 binary).

Then, what if I run the program in the challenges directory?

Let's check with the core.

-- GDB --

Core was generated by `../challenges/frame-pointer/frame-pointer-32'.

Program terminated with signal SIGSEGV, Segmentation fault.

Ah, now you can see that the address starts at 0xffffd510 (contains 0x41414141).

Previously, it was 0xffffd560 but now it is 0xffffd510 (changed!).

In a subsequent run of the challenge program, this address will be the same as long as your command line is the same (i.e., you run ../challenges/frame-pointer/frame-pointer-32, not the same binary from a different path or the like).

A shortcut to getting the correct address is that, when you are using pwntools (writing a Python script), the program execution inside pwntools will also generate a corefile in your directory.

