I need to do some real-time image processing on the camera preview data, such as face detection (via a C library), and then display the preview with the detected faces on screen.
I have read http://nezarobot.blogspot.com/2016/03/android-surfacetexture-camera2-opencv.html and Eddy Talvala's answer to Android camera2 API – Display processed frame in real time. Following those two pages, I managed to build the app (it doesn't call the face-detection library yet; it only tries to display the preview using ANativeWindow), but every time I run it on a Google Pixel (Android 7.1.0, API 25) emulated in Genymotion, the app crashes with the following log:
```
08-28 14:23:09.598 2099-2127/tau.camera2demo A/libc: Fatal signal 11 (SIGSEGV), code 2, fault addr 0xd3a96000 in tid 2127 (CAMERA2)
[ 08-28 14:23:09.599   117:  117 W/         ]
debuggerd: handling request: pid=2099 uid=10067 gid=10067 tid=2127
```
I googled around but found no answer.
The whole project is on GitHub: https://github.com/Fung-yuantao/android-camera2demo
Here is the key code (I think).
The code in Camera2Demo.java:
```java
private void startPreview(CameraDevice camera) throws CameraAccessException {
    SurfaceTexture texture = mPreviewView.getSurfaceTexture();
    // set the preview size
    texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
    surface = new Surface(texture);
    try {
        // create the preview request
        mPreviewBuilder = camera.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    } catch (CameraAccessException e) {
        e.printStackTrace();
    }
    mImageReader = ImageReader.newInstance(mImageWidth, mImageHeight,
            ImageFormat.YUV_420_888, 2);
    mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mHandler);
    mPreviewBuilder.addTarget(mImageReader.getSurface());

    // output Surface
    List<Surface> outputSurfaces = new ArrayList<>();
    outputSurfaces.add(mImageReader.getSurface());
    /*
    camera.createCaptureSession(
            Arrays.asList(surface, mImageReader.getSurface()),
            mSessionStateCallback, mHandler);
    */
    camera.createCaptureSession(outputSurfaces, mSessionStateCallback, mHandler);
}

private CameraCaptureSession.StateCallback mSessionStateCallback =
        new CameraCaptureSession.StateCallback() {
    @Override
    public void onConfigured(CameraCaptureSession session) {
        try {
            updatePreview(session);
        } catch (CameraAccessException e) {
            e.printStackTrace();
        }
    }

    @Override
    public void onConfigureFailed(CameraCaptureSession session) {
    }
};

private void updatePreview(CameraCaptureSession session) throws CameraAccessException {
    mPreviewBuilder.set(CaptureRequest.CONTROL_AF_MODE,
            CaptureRequest.CONTROL_AF_MODE_AUTO);
    session.setRepeatingRequest(mPreviewBuilder.build(), null, mHandler);
}

private ImageReader.OnImageAvailableListener mOnImageAvailableListener =
        new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        // get the newest frame
        Image image = reader.acquireNextImage();
        if (image == null) {
            return;
        }
        // print the image format
        int format = reader.getImageFormat();
        Log.d(TAG, "the format of captured frame: " + format);

        // HERE the JNI method is called
        JNIUtils.display(image.getWidth(), image.getHeight(),
                image.getPlanes()[0].getBuffer(), surface);
        //ByteBuffer buffer = image.getPlanes()[0].getBuffer();
        //byte[] bytes = new byte[buffer.remaining()];
        image.close();
    }
};
```
The code in JNIUtils.java:
```java
import android.media.Image;
import android.view.Surface;

import java.nio.ByteBuffer;

public class JNIUtils {
    // TAG for the JNIUtils class
    private static final String TAG = "JNIUtils";

    // Load native library.
    static {
        System.loadLibrary("native-lib");
    }

    public static native void display(int srcWidth, int srcHeight,
                                      ByteBuffer srcBuffer, Surface surface);
}
```
The code in native-lib.cpp:
```cpp
#include <jni.h>
#include <string>
#include <android/log.h>
//#include <android/bitmap.h>
#include <android/native_window_jni.h>

#define LOGE(...) __android_log_print(ANDROID_LOG_ERROR, "Camera2Demo", __VA_ARGS__)

extern "C" {

JNIEXPORT jstring JNICALL Java_tau_camera2demo_JNIUtils_display(
        JNIEnv *env,
        jobject obj,
        jint srcWidth,
        jint srcHeight,
        jobject srcBuffer,
        jobject surface) {
    /*
    uint8_t *srcLumaPtr = reinterpret_cast<uint8_t *>(
            env->GetDirectBufferAddress(srcBuffer));
    if (srcLumaPtr == nullptr) {
        LOGE("srcLumaPtr null ERROR!");
        return NULL;
    }
    */

    ANativeWindow *window = ANativeWindow_fromSurface(env, surface);
    ANativeWindow_acquire(window);
    ANativeWindow_Buffer buffer;
    ANativeWindow_setBuffersGeometry(window, srcWidth, srcHeight,
                                     0 /* format unchanged */);
    if (int32_t err = ANativeWindow_lock(window, &buffer, NULL)) {
        LOGE("ANativeWindow_lock failed with error code: %d\n", err);
        ANativeWindow_release(window);
        return NULL;
    }

    memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);

    ANativeWindow_unlockAndPost(window);
    ANativeWindow_release(window);

    return NULL;
}
}
```
After I commented out the memcpy, the app no longer crashes, but it displays nothing. So I guess the problem now becomes how to correctly use memcpy to copy the captured/processed buffer into buffer.bits.
Update:

I changed

```cpp
memcpy(buffer.bits, srcBuffer, srcWidth * srcHeight * 4);
```

to

```cpp
memcpy(buffer.bits, srcLumaPtr, srcWidth * srcHeight * 4);
```

The app no longer crashes and starts displaying, but it displays something strange.
Solution:
As yakobom mentioned, you are trying to copy a YUV_420_888 image directly into an RGBA_8888 destination (the default, if you haven't changed it). That won't work with just a memcpy.
You actually need to convert the data, and you need to make sure you don't copy too much: the sample code you have copies width*height*4 bytes, while a YUV_420_888 image takes up only stride*height*1.5 bytes (roughly). So when you copy, you run off the end of the source buffer.
You also have to take the row stride provided at the Java level into account to index into the buffer correctly. This link from Microsoft has a useful diagram.
If you only care about luminance (so grayscale output is enough), just duplicate the luminance channel into the R, G, and B channels. The pseudocode is roughly:
```cpp
uint8_t *outPtr = buffer.bits;
for (size_t y = 0; y < height; y++) {
    uint8_t *rowPtr = srcLumaPtr + y * srcLumaStride;
    for (size_t x = 0; x < width; x++) {
        *(outPtr++) = *rowPtr;  // R
        *(outPtr++) = *rowPtr;  // G
        *(outPtr++) = *rowPtr;  // B
        *(outPtr++) = 255;      // alpha, opaque for RGBA_8888
        ++rowPtr;
    }
}
```
You will need to read srcLumaStride from the Image object (the row stride of the first Plane) and pass it down through JNI as well.