android.hardware.camera2 provides the interfaces for accessing the camera devices attached to an Android device.
The android.hardware.camera2 package provides an interface to
individual camera devices connected to an Android device. It replaces
the deprecated {android.hardware.Camera} class.
android.hardware.camera2 models a camera device as a pipeline: it takes in capture requests, processes them, and outputs the results.
This package models a camera device as a pipeline, which
takes in input requests for capturing a single frame,
captures the single image per the request, and then
outputs one capture result metadata packet, plus a set of output image buffers for the request.
The requests are processed in-order, and multiple requests can be in flight at
once. Since the camera device is a pipeline with multiple stages,
having multiple requests in flight is required to maintain full framerate on most Android devices.
Camera devices are enumerated, queried, and opened through android.hardware.camera2.CameraManager.
To enumerate, query, and open available camera devices, obtain a
{android.hardware.camera2.CameraManager} instance.
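As a minimal sketch (not part of the sample analyzed below), enumerating the available ids and opening a back-facing camera could look like this; `context`, `stateCallback`, and `backgroundHandler` are assumed to exist and the CAMERA permission to be already granted:
// Sketch only: list camera ids and open the first back-facing device.
CameraManager manager =
        (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
try {
    for (String id : manager.getCameraIdList()) {
        CameraCharacteristics chars = manager.getCameraCharacteristics(id);
        Integer facing = chars.get(CameraCharacteristics.LENS_FACING);
        if (facing != null && facing == CameraCharacteristics.LENS_FACING_BACK) {
            // Asynchronous: the opened CameraDevice is delivered to stateCallback.onOpened().
            manager.openCamera(id, stateCallback, backgroundHandler);
            break;
        }
    }
} catch (CameraAccessException e) {
    e.printStackTrace();
}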
CameraCharacteristics describes the static properties of the hardware device, including its available settings and output parameters.
Individual {android.hardware.camera2.CameraDevice
CameraDevices} provide a set of static property information that
describes the hardware device and the available settings and output
parameters for the device. This information is provided through the
{android.hardware.camera2.CameraCharacteristics} object, and is
available through {android.hardware.camera2.CameraManager#getCameraCharacteristics}.
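For illustration, a short sketch of reading a few static properties; `manager` is a CameraManager, `cameraId` is one of its ids, and CameraAccessException handling is omitted:
// Sketch: query static capabilities for one camera id.
CameraCharacteristics characteristics = manager.getCameraCharacteristics(cameraId);
Integer hwLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
Integer sensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
StreamConfigurationMap configMap =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);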
To capture still images or stream video, create an android.hardware.camera2.CameraCaptureSession with a set of output Surfaces.
To capture or stream images from a camera device, the application
must first create a {android.hardware.camera2.CameraCaptureSession camera capture session}
with a set of output Surfaces for use with the camera device, with
{android.hardware.camera2.CameraDevice#createCaptureSession}.
Each Surface must be pre-configured with a size and format that match what the device's StreamConfigurationMap reports.
Each Surface has to be pre-configured with an {android.hardware.camera2.params.StreamConfigurationMap appropriate
size and format} (if applicable) to match the sizes and formats available from the camera device.
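A sketch of checking the supported sizes before configuring Surfaces, assuming `characteristics` was obtained as above:
// Sketch: the StreamConfigurationMap lists the sizes/formats the device can output.
StreamConfigurationMap map =
        characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
Size[] previewSizes = map.getOutputSizes(SurfaceTexture.class); // candidates for preview targets
Size[] jpegSizes = map.getOutputSizes(ImageFormat.JPEG);        // candidates for still capture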
Ways to obtain a target Surface:
A target Surface can be obtained from a variety of classes, including
{android.view.SurfaceView},
{android.graphics.SurfaceTexture} via
{android.view.Surface#Surface(SurfaceTexture)},
{android.media.MediaCodec},
{android.media.MediaRecorder},
{android.renderscript.Allocation}, and
{android.media.ImageReader}.
In general, preview images are sent to android.view.SurfaceView or android.view.TextureView.
Generally, camera preview images are sent to {android.view.SurfaceView} or
{android.view.TextureView} (via its {android.graphics.SurfaceTexture}).
Capture of JPEG images or RAW buffers:
Capture of JPEG images or RAW buffers for {android.hardware.camera2.DngCreator} can be
done with {android.media.ImageReader} with the {android.graphics.ImageFormat#JPEG} and
{android.graphics.ImageFormat#RAW_SENSOR} formats.
Application-driven processing of camera data in Renderscript, OpenGL ES, or directly in
managed or native code is best done through {android.renderscript.Allocation} with a YUV
{android.renderscript.Type}, {android.graphics.SurfaceTexture},
and {android.media.ImageReader} with a {android.graphics.ImageFormat#YUV_420_888} format, respectively.
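As a hedged sketch of the ImageReader path for in-app processing (the 640x480 size, maxImages of 2, and `backgroundHandler` are illustrative assumptions):
// Sketch: a YUV_420_888 ImageReader whose frames can be processed in managed code.
ImageReader yuvReader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 2);
yuvReader.setOnImageAvailableListener(reader -> {
    Image image = reader.acquireLatestImage();
    if (image == null) {
        return;
    }
    ByteBuffer yPlane = image.getPlanes()[0].getBuffer(); // luma plane; U/V are planes 1 and 2
    // ... run image processing / model inference on the planes here ...
    image.close(); // always release the buffer back to the reader
}, backgroundHandler);
// yuvReader.getSurface() is then added as an output Surface of the session and the request.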
The application then needs to construct a CaptureRequest; the CaptureRequest defines the required capture parameters and the output Surfaces.
The application then needs to construct a {android.hardware.camera2.CaptureRequest},
which defines all the capture parameters needed by a camera device to capture a single
image. The request also lists which of the configured output Surfaces should be used as targets for this capture.
CaptureRequest.Builder is the way to build a CaptureRequest (and set its parameters).
The CameraDevice has a {android.hardware.camera2.CameraDevice#createCaptureRequest
factory method} for creating a {android.hardware.camera2.CaptureRequest.Builder request builder} for a
given use case, which is optimized for the Android device the
application is running on.
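A minimal sketch of using that factory method, assuming an opened `cameraDevice` and a configured output `surface` (CameraAccessException handling omitted):
// Sketch: build a preview request from the TEMPLATE_PREVIEW template.
CaptureRequest.Builder builder =
        cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
builder.addTarget(surface); // which configured Surface receives this capture's image
builder.set(CaptureRequest.CONTROL_AF_MODE,
        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE); // a per-request parameter
CaptureRequest request = builder.build();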
The capture session processes the request.
Once the request has been set up, it can be handed to the active
capture session either for a one-shot {android.hardware.camera2.CameraCaptureSession#capture capture}
or for an endlessly {android.hardware.camera2.CameraCaptureSession#setRepeatingRequest
repeating} use.
Both methods also have a variant that accepts a list
of requests to use as a burst capture/repeating burst. Repeating
requests have a lower priority than captures.
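For illustration, the two submission modes side by side; `session`, `previewRequest`, `stillRequest`, `callback`, and `handler` are assumed to exist, and exception handling is omitted:
// Sketch: a repeating request drives the preview, a one-shot capture grabs a single frame.
session.setRepeatingRequest(previewRequest, callback, handler); // runs until replaced or stopped
session.capture(stillRequest, callback, handler);               // one frame, takes priority over the repeat
// Burst variants exist as well: captureBurst(...) and setRepeatingBurst(...).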
After processing a request, the camera device produces a TotalCaptureResult object containing status information.
After processing a request, the camera device will produce a
{android.hardware.camera2.TotalCaptureResult} object, which contains
information about the state of the camera device at time of capture,
and the final settings used. These may vary somewhat from the request,
if rounding or resolving contradictory parameters was necessary.
The camera device also sends a frame of image data to each of the output Surfaces listed in the request.
The camera device will also send a frame of image data into each of the
output {Surfaces} included in the request. These are produced
asynchronously relative to the output CaptureResult, sometimes
substantially later.
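A sketch of receiving that metadata through a CaptureCallback; the chosen result keys are just examples:
// Sketch: inspect the per-frame result metadata; image buffers arrive separately on the Surfaces.
CameraCaptureSession.CaptureCallback resultCallback =
        new CameraCaptureSession.CaptureCallback() {
    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
            @NonNull CaptureRequest request, @NonNull TotalCaptureResult result) {
        Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);     // autofocus state at capture time
        Long exposureNs = result.get(CaptureResult.SENSOR_EXPOSURE_TIME); // final exposure actually used
    }
};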
A configured capture session for a {CameraDevice}, used for capturing images from the camera.
A CameraCaptureSession is created by providing a set of target output surfaces to
{CameraDevice#createCaptureSession createCaptureSession}.
Once created, the session remains active until a new session is created by the camera device, or the camera device is closed.
// Why session creation is implemented as an asynchronous callback (listener)
* Creating a session is an expensive operation and can take several hundred milliseconds, since
* it requires configuring the camera device's internal pipelines and allocating memory buffers for
* sending images to the desired targets. Therefore the setup is done asynchronously, and
* {createCaptureSession} and
* {createReprocessableCaptureSession} will
* send the ready-to-use CameraCaptureSession to the provided listener's
* {CameraCaptureSession.StateCallback#onConfigured onConfigured} callback. If configuration
* cannot be completed, then the
* {CameraCaptureSession.StateCallback#onConfigureFailed onConfigureFailed} is called, and the
* session will not become active.
// The relationship between capture requests and the session; note that capture requests come in two kinds (repeating and non-repeating)
* Any capture requests (repeating or non-repeating) submitted before the session is ready will
* be queued up and will begin capture once the session becomes ready. In case the session cannot be
* configured and {onConfigureFailed} is called, all queued
* capture requests are discarded.
// Can only one session exist at a time? If a new session is created, is the previous session closed automatically? Yes.
* If a new session is created by the camera device, then the previous session is closed, and its
* associated {@link StateCallback#onClosed onClosed} callback will be invoked. All
* of the session methods will throw an IllegalStateException if called once the session is
* closed.
* A closed session clears any repeating requests (as if {stopRepeating} had been called),
* but will still complete all of its in-progress capture requests as normal, before a newly
* created session takes over and reconfigures the camera device.
public abstract class CameraCaptureSession implements AutoCloseable {
public abstract int capture(@NonNull CaptureRequest request,
@Nullable CaptureCallback listener, @Nullable Handler handler)
throws CameraAccessException;
@Override
public abstract void close();
}
CameraMetadata
public abstract class CameraMetadata<TKey> {
}
CaptureRequest
public final class CaptureRequest extends CameraMetadata<CaptureRequest.Key<?>>
        implements Parcelable {
    public final static class Builder {
        private final CaptureRequest mRequest;
        public Builder(CameraMetadataNative template, boolean reprocess,
                int reprocessableSessionId, String logicalCameraId,
                Set<String> physicalCameraIdSet) {
            mRequest = new CaptureRequest(template, reprocess, reprocessableSessionId,
                    logicalCameraId, physicalCameraIdSet);
        }
        // Register an output Surface as a target for this request's image data.
        public void addTarget(@NonNull Surface outputTarget) {
            mRequest.mSurfaceSet.add(outputTarget);
        }
    }
}
CaptureResult
public class CaptureResult extends CameraMetadata<CaptureResult.Key<?>> {
}
Camera2-related callbacks: the Camera2 API is pipeline-based, multi-stage, asynchronous, and callback-driven.
1. Open the camera device.
2. When the camera device has been opened, create a CaptureSession.
3. When the CaptureSession has been created successfully, onConfigured is called and capture requests can now be processed.
4. After a capture request is processed, frames of data are produced; the ImageReader's onImageAvailable is called, where image processing, model inference, and so on can begin.
A code walkthrough of an app based on camera2
public class Camera2BasicFragment extends Fragment
implements View.OnClickListener, FragmentCompat.OnRequestPermissionsResultCallback {
@Override
public void onViewCreated(final View view, Bundle savedInstanceState) {
view.findViewById(R.id.picture).setOnClickListener(this);
view.findViewById(R.id.info).setOnClickListener(this);
// Create the AutoFitTextureView mTextureView used for the camera preview.
mTextureView = (AutoFitTextureView) view.findViewById(R.id.texture);
}
@Override
public void onResume() {
super.onResume();
startBackgroundThread();
// When the screen is turned off and turned back on, the SurfaceTexture is already
// available, and "onSurfaceTextureAvailable" will not be called. In that case, we can open
// a camera and start preview from here (otherwise, we wait until the surface is ready in
// the SurfaceTextureListener).
if (mTextureView.isAvailable()) { // if the view is available, open the camera and start the preview
openCamera(mTextureView.getWidth(), mTextureView.getHeight());
} else { // otherwise register a listener and wait until the view becomes available
mTextureView.setSurfaceTextureListener(mSurfaceTextureListener);
}
}
}
private final TextureView.SurfaceTextureListener mSurfaceTextureListener
= new TextureView.SurfaceTextureListener() {
@Override
public void onSurfaceTextureAvailable(SurfaceTexture texture, int width, int height) {
openCamera(width, height);
}
// The remaining SurfaceTextureListener callbacks (onSurfaceTextureSizeChanged,
// onSurfaceTextureDestroyed, onSurfaceTextureUpdated) are omitted in this excerpt.
};
Once the view used for the preview is available, openCamera(width, height) is called:
private void openCamera(int width, int height) {
if (ContextCompat.checkSelfPermission(getActivity(), Manifest.permission.CAMERA)
!= PackageManager.PERMISSION_GRANTED) {
requestCameraPermission();
return;
}
setUpCameraOutputs(width, height);
configureTransform(width, height);
Activity activity = getActivity();
CameraManager manager = (CameraManager) activity.getSystemService(Context.CAMERA_SERVICE);
try {
if (!mCameraOpenCloseLock.tryAcquire(2500, TimeUnit.MILLISECONDS)) {
throw new RuntimeException("Time out waiting to lock camera opening.");
}
manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
} catch (InterruptedException e) {
throw new RuntimeException("Interrupted while trying to lock camera opening.", e);
}
}
// Assigns the camera-related member variables:
// mImageReader, mSensorOrientation, mPreviewSize, mFlashSupported,
// and registers mImageReader's OnImageAvailableListener.
private void setUpCameraOutputs(int width, int height) {
for (String cameraId : manager.getCameraIdList()) {
CameraCharacteristics characteristics
= manager.getCameraCharacteristics(cameraId);
// We don't use a front facing camera in this sample.
Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
continue;
}
StreamConfigurationMap map = characteristics.get(
CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
if (map == null) {
continue;
}
mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
ImageFormat.JPEG, 2);
mImageReader.setOnImageAvailableListener(
mOnImageAvailableListener, mBackgroundHandler);
// Find out if we need to swap dimension to get the preview size relative to sensor
// coordinate.
int displayRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
//noinspection ConstantConditions
mSensorOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
// Danger, W.R.! Attempting to use too large a preview size could exceed the camera
// bus' bandwidth limitation, resulting in gorgeous previews but the storage of
// garbage capture data.
mPreviewSize = chooseOptimalSize(map.getOutputSizes(SurfaceTexture.class),
rotatedPreviewWidth, rotatedPreviewHeight, maxPreviewWidth,
maxPreviewHeight, largest);
// Check if the flash is supported.
Boolean available = characteristics.get(CameraCharacteristics.FLASH_INFO_AVAILABLE);
mFlashSupported = available == null ? false : available;
mCameraId = cameraId;
return;
}
}
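The excerpt above omits `largest`, `rotatedPreviewWidth`/`rotatedPreviewHeight`, and `chooseOptimalSize`. As a sketch of how `largest` (the biggest JPEG output size used for mImageReader) can be computed, assuming a small area-based Comparator helper:
// Sketch: pick the largest available JPEG size by pixel area.
Size largest = Collections.max(
        Arrays.asList(map.getOutputSizes(ImageFormat.JPEG)),
        new CompareSizesByArea());

// Assumed helper: orders Sizes by width * height.
static class CompareSizesByArea implements Comparator<Size> {
    @Override
    public int compare(Size lhs, Size rhs) {
        return Long.signum((long) lhs.getWidth() * lhs.getHeight()
                - (long) rhs.getWidth() * rhs.getHeight());
    }
}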
private void configureTransform(int viewWidth, int viewHeight) {
Activity activity = getActivity();
if (null == mTextureView || null == mPreviewSize || null == activity) {
return;
}
int rotation = activity.getWindowManager().getDefaultDisplay().getRotation();
Matrix matrix = new Matrix();
RectF viewRect = new RectF(0, 0, viewWidth, viewHeight);
RectF bufferRect = new RectF(0, 0, mPreviewSize.getHeight(), mPreviewSize.getWidth());
float centerX = viewRect.centerX();
float centerY = viewRect.centerY();
if (Surface.ROTATION_90 == rotation || Surface.ROTATION_270 == rotation) {
bufferRect.offset(centerX - bufferRect.centerX(), centerY - bufferRect.centerY());
matrix.setRectToRect(viewRect, bufferRect, Matrix.ScaleToFit.FILL);
float scale = Math.max(
(float) viewHeight / mPreviewSize.getHeight(),
(float) viewWidth / mPreviewSize.getWidth());
matrix.postScale(scale, scale, centerX, centerY);
matrix.postRotate(90 * (rotation - 2), centerX, centerY);
} else if (Surface.ROTATION_180 == rotation) {
matrix.postRotate(180, centerX, centerY);
}
mTextureView.setTransform(matrix);
}
openCamera() ends with manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler); the mStateCallback passed to it is defined as follows:
private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {
@Override
public void onOpened(@NonNull CameraDevice cameraDevice) {
// This method is called when the camera is opened.
// We start camera preview here.
mCameraOpenCloseLock.release();
mCameraDevice = cameraDevice;
createCameraPreviewSession();
}
@Override
public void onDisconnected(@NonNull CameraDevice cameraDevice) {
mCameraOpenCloseLock.release();
cameraDevice.close();
mCameraDevice = null;
}
@Override
public void onError(@NonNull CameraDevice cameraDevice, int error) {
mCameraOpenCloseLock.release();
cameraDevice.close();
mCameraDevice = null;
Activity activity = getActivity();
if (null != activity) {
activity.finish();
}
}
};
// After openCamera, onOpened() is called back, which in turn calls createCameraPreviewSession():
private void createCameraPreviewSession() {
try {
SurfaceTexture texture = mTextureView.getSurfaceTexture();
assert texture != null;
// We configure the size of default buffer to be the size of camera preview we want.
texture.setDefaultBufferSize(mPreviewSize.getWidth(), mPreviewSize.getHeight());
// This is the output Surface we need to start preview.
Surface surface = new Surface(texture);
// We set up a CaptureRequest.Builder with the output Surface.
mPreviewRequestBuilder
= mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
mPreviewRequestBuilder.addTarget(surface);
// Here, we create a CameraCaptureSession for camera preview.
// Create a CaptureSession from the CameraDevice, wiring up the preview surface and mImageReader's surface.
// Once the session is configured, build the preview request and call mCaptureSession.setRepeatingRequest to start the preview.
mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
new CameraCaptureSession.StateCallback() {
@Override
public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
// The camera is already closed
if (null == mCameraDevice) {
return;
}
// When the session is ready, we start displaying the preview.
mCaptureSession = cameraCaptureSession;
try {
// Auto focus should be continuous for camera preview.
mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
// Flash is automatically enabled when necessary.
setAutoFlash(mPreviewRequestBuilder);
// Finally, we start displaying the camera preview.
mPreviewRequest = mPreviewRequestBuilder.build();
mCaptureSession.setRepeatingRequest(mPreviewRequest,
mCaptureCallback, mBackgroundHandler);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
@Override
public void onConfigureFailed(
@NonNull CameraCaptureSession cameraCaptureSession) {
showToast("Failed");
}
}, null
);
} catch (CameraAccessException e) {
e.printStackTrace();
}
}
Finally, an Image becomes available in the ImageReader:
private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
= new ImageReader.OnImageAvailableListener() {
@Override
public void onImageAvailable(ImageReader reader) {
mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
}
};
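ImageSaver is not shown in this excerpt; a minimal sketch of what it does, assuming it simply copies the JPEG bytes out of the Image and writes them to the given File on the background thread:
// Sketch: a Runnable (nested in the fragment) that saves a JPEG Image to a File.
private static class ImageSaver implements Runnable {
    private final Image mImage;
    private final File mFile;

    ImageSaver(Image image, File file) {
        mImage = image;
        mFile = file;
    }

    @Override
    public void run() {
        ByteBuffer buffer = mImage.getPlanes()[0].getBuffer();
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        try (FileOutputStream output = new FileOutputStream(mFile)) {
            output.write(bytes);
        } catch (IOException e) {
            e.printStackTrace();
        } finally {
            mImage.close(); // release the Image back to the ImageReader
        }
    }
}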