The unprocessed video
The processed video, with the time drawn on
Preface: after getting acquainted with MediaCodec, I tried doing some post-processing on audio/video, namely drawing the time onto a video. I searched the web for a pile of material, and once I had a rough understanding I started writing. I didn't expect the project to be this big, and there were plenty of pitfalls. I got it working in the end, but it doesn't run perfectly on every phone. The HTC Desire 820 runs fine. The HTC E9 Plus can't fetch a frame with codec.getOutputImage(index); processing with ByteBuffer buffer = codec.getOutputBuffer(index) works, but the video has ghosting, because the conversion to RGB and back to YUV420 isn't done correctly, and I couldn't find a fix, so that's where it stands for now. In fact, with the COLOR_FormatYUV420Flexible format the ByteBuffer already is YUV420, whereas Image image = codec.getOutputImage(index) requires an extra conversion into a YUV420 byte array (byte[]). On the HTC Desire 820 it's the reverse: handling the video via getOutputBuffer produced corrupted frames (probably just a conversion issue, but I couldn't find documentation, so I left it), while handling it via getOutputImage is OK.
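The post doesn't show the decoder setup itself, so here is a minimal sketch of what the text implies (the method name createYuvDecoder and the variable names are mine, not the author's): configure the decoder for COLOR_FormatYUV420Flexible with no Surface, so the raw buffers come back to us, then read frames via getOutputImage with getOutputBuffer as the fallback path described above.
// Minimal sketch (not the post's exact code): decoder configured for
// flexible YUV output; with no Surface the raw frames come back to us.
private MediaCodec createYuvDecoder(MediaExtractor extractor, int videoTrack) throws IOException {
    MediaFormat format = extractor.getTrackFormat(videoTrack);
    format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
            MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Flexible);
    MediaCodec decoder = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
    decoder.configure(format, null, null, 0);
    decoder.start();
    return decoder;
}
// Later, for a dequeued output index, one of the two paths discussed above:
// Image image = decoder.getOutputImage(index);     // the HTC Desire 820 path
// ByteBuffer buf = decoder.getOutputBuffer(index); // the HTC E9 Plus fallback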
First, the drawing concept: Canvas drawing has to be done on every frame, and the frame must be a Bitmap. The frame the decoder extracts via Image image = codec.getOutputImage(index) is YUV420, because the MediaCodec format has to be COLOR_FormatYUV420Flexible. So each frame must be converted to RGB and decoded into a Bitmap before Canvas can draw on it, and after drawing it has to be converted back to YUV420 before it can be fed to the encoder, so the muxer can put audio and video back together. This conversion is painfully slow: on the HTC Desire 820, a one-minute video took nearly an hour. The conversion flow is below. data is the YUV420 byte array (byte[]) converted from the Image; conversion routines can be found online (getDataFromImage, listed below, is one). The data is then compressed into a stream with YuvImage and decoded into a Bitmap, which Canvas can draw on; here I draw the time onto the Bitmap. After that, it is converted back to YUV420 so the byte[] can be fed to MediaCodec; getNV21 is that conversion, also found online (my rewritten version is listed below). The point here is the flow; the full program is far too long, so only the conversion code is shown.
// Render step: convert the NV21 data to a Bitmap, draw on it, convert back
ByteArrayOutputStream out = new ByteArrayOutputStream();
Rect rect = new Rect(0, 0, width, height);
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, rect.width(), rect.height(), null);
yuvImage.compressToJpeg(rect, 100, out);
// JPEG bytes; decodeByteArray turns them into a mutable ARGB Bitmap
byte[] imagebytes = out.toByteArray();
Bitmap bmp = BitmapFactory.decodeByteArray(imagebytes, 0, imagebytes.length).copy(Bitmap.Config.ARGB_8888, true);
// Draw the current time onto the frame
Paint paint = new Paint();
paint.setColor(Color.GREEN);
Canvas canvas = new Canvas(bmp);
paint.setStrokeWidth(15);
paint.setTextSize(50);
canvas.drawText(get_time(), 100, 150, paint);
//canvas.drawText(videoTimeFormat.format(info.presentationTimeUs / 1000), 100, 150, paint);
// Optionally save the frame to a file for debugging
if (IS_SAVE) { compressToJpeg(i, bmp); }
// Convert back to YUV420 (NV21); this byte[] can be fed to MediaCodec
byte[] encode_data = getNV21(width, height, bmp);
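get_time() and compressToJpeg() are helpers the post doesn't show. Judging by how they are called, they might look roughly like this (a sketch and my assumption only; IS_SAVE, the time format, and the output path are made up, and compressToJpeg assumes a Context such as an Activity):
private static final boolean IS_SAVE = false;
private final SimpleDateFormat timeFormat = new SimpleDateFormat("HH:mm:ss", Locale.US);

// Timestamp string drawn onto each frame
private String get_time() {
    return timeFormat.format(new Date());
}

// Dump frame i as a JPEG file, for debugging
private void compressToJpeg(int i, Bitmap bmp) {
    File f = new File(getExternalFilesDir(null), "frame_" + i + ".jpg");
    try (FileOutputStream fos = new FileOutputStream(f)) {
        bmp.compress(Bitmap.CompressFormat.JPEG, 90, fos);
    } catch (IOException e) {
        Log.e("TAG_SAVE", "save frame failed", e);
    }
}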
Below is the conversion from the Image (YUV420) into a YUV420 byte array in NV21 or I420 layout (found via Google; here the second argument must be colorFormat = COLOR_FormatNV21). The Image from Image image = codec.getOutputImage(index) has to go through this conversion into a YUV420 byte array before it can be converted to RGB.
////////////////////////////////////////YUV 420 byte array//////////////////////////////////////////////////////
private byte[] getDataFromImage(Image image, int colorFormat) {
    if (colorFormat != COLOR_FormatI420 && colorFormat != COLOR_FormatNV21) {
        throw new IllegalArgumentException("only support COLOR_FormatI420 and COLOR_FormatNV21");
    }
    if (!isImageFormatSupported(image)) {
        throw new RuntimeException("can't convert Image to byte array, format " + image.getFormat());
    }
    Rect crop = image.getCropRect();
    int format = image.getFormat();
    int width = crop.width();
    int height = crop.height();
    Image.Plane[] planes = image.getPlanes();
    byte[] data = new byte[width * height * ImageFormat.getBitsPerPixel(format) / 8];
    byte[] rowData = new byte[planes[0].getRowStride()];
    if (VERBOSE) Log.d("TAG_L", "get data from " + planes.length + " planes");
    int channelOffset = 0;
    int outputStride = 1;
    for (int i = 0; i < planes.length; i++) {
        switch (i) {
            case 0: // Y plane
                channelOffset = 0;
                outputStride = 1;
                break;
            case 1: // U plane
                if (colorFormat == COLOR_FormatI420) {
                    channelOffset = width * height;
                    outputStride = 1;
                } else if (colorFormat == COLOR_FormatNV21) {
                    channelOffset = width * height + 1;
                    outputStride = 2;
                }
                break;
            case 2: // V plane
                if (colorFormat == COLOR_FormatI420) {
                    channelOffset = (int) (width * height * 1.25);
                    outputStride = 1;
                } else if (colorFormat == COLOR_FormatNV21) {
                    channelOffset = width * height;
                    outputStride = 2;
                }
                break;
        }
        ByteBuffer buffer = planes[i].getBuffer();
        int rowStride = planes[i].getRowStride();
        int pixelStride = planes[i].getPixelStride();
        if (VERBOSE) {
            Log.v(TAG, "pixelStride " + pixelStride);
            Log.v(TAG, "rowStride " + rowStride);
            Log.v(TAG, "width " + width);
            Log.v(TAG, "height " + height);
            Log.v(TAG, "buffer size " + buffer.remaining());
        }
        // shift = 0 for the full-resolution Y plane, 1 for the subsampled U/V planes
        int shift = (i == 0) ? 0 : 1;
        int w = width >> shift;
        int h = height >> shift;
        buffer.position(rowStride * (crop.top >> shift) + pixelStride * (crop.left >> shift));
        for (int row = 0; row < h; row++) {
            int length;
            if (pixelStride == 1 && outputStride == 1) {
                // Tightly packed plane: copy the whole row at once
                length = w;
                buffer.get(data, channelOffset, length);
                channelOffset += length;
            } else {
                // Strided plane: copy the row, then pick out every pixelStride-th byte
                length = (w - 1) * pixelStride + 1;
                buffer.get(rowData, 0, length);
                for (int col = 0; col < w; col++) {
                    data[channelOffset] = rowData[col * pixelStride];
                    channelOffset += outputStride;
                }
            }
            if (row < h - 1) {
                buffer.position(buffer.position() + rowStride - length);
            }
        }
        if (VERBOSE) Log.v(TAG, "Finished reading data from plane " + i);
    }
    return data;
}
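The method above also relies on a VERBOSE flag, a TAG, the COLOR_FormatI420 / COLOR_FormatNV21 constants, and an isImageFormatSupported() helper that the post doesn't show. In the commonly circulated version of this code they look like this (a sketch; the constants are just internal tags, not MediaCodecInfo color values, and the TAG string is my guess):
private static final boolean VERBOSE = false;
private static final String TAG = "TAG_L"; // log tag; the actual value is my guess
// Internal tags for the target layout; not MediaCodecInfo color constants
public static final int COLOR_FormatI420 = 1;
public static final int COLOR_FormatNV21 = 2;

private static boolean isImageFormatSupported(Image image) {
    switch (image.getFormat()) {
        case ImageFormat.YUV_420_888:
        case ImageFormat.NV21:
        case ImageFormat.YV12:
            return true;
    }
    return false;
}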
Below is the conversion from the Bitmap back to a YUV420 byte array, also found via Google, though it no longer matches the original: I rewrote the inner loop so it addresses each pixel using only width and height, which makes it possible to split the work across threads. With split = 2, the height is cut into 2 segments and 2 threads are started: the first handles rows 0 to height/2, the second height/2 to height. Each thread is added to a List, and execution only continues once every thread in the List has finished. With split = 32, 32 threads are started. I assumed more threads would be much faster, but I was wrong: if the CPU isn't fast, more threads don't help, so I settled on split = 2. One more thing to watch out for below: B = Color.red(argb[num]) and R = Color.blue(argb[num]). B should be blue and R red, so this looks like a mistake, but it isn't: B and R come out swapped. I didn't know about this pitfall at first and the colors kept coming out wrong. After a long search on Google I found a post saying that in BMP the B and R channels are stored in reverse order, and once I swapped B and R the colors were correct.
public byte[] getNV21(int width, int height, Bitmap scaled) {
    int[] argb = new int[width * height];
    scaled.getPixels(argb, 0, width, 0, 0, width, height);
    byte[] yuv = new byte[width * height * 3 / 2];
    encodeYUV420SP(yuv, argb, width, height);
    scaled.recycle();
    return yuv;
}

public void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
    List<caulate> th_list = new ArrayList<>();
    // The larger split is, the more threads are started; more is not always faster
    int split = 2; // power of 2
    int dh = height / split;
    // Split the rows across several worker threads
    for (int i = 0; i < height; i = i + dh) {
        int end = i + dh;
        if (end > height) { end = height; }
        caulate ac = new caulate(yuv420sp, argb, width, height, i, end);
        th_list.add(ac);
    }
    // Wait until every worker thread has finished
    for (int j = 0; j < th_list.size(); j++) {
        try {
            th_list.get(j).join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
private class caulate extends Thread {
    int[] argb;
    byte[] yuv420sp;
    int width;
    int height;
    int jbegin;
    int jend;

    public caulate(byte[] myuv420sp, int[] margb, int mwidth, int mheight, int mjbegin, int mjend) {
        yuv420sp = myuv420sp;
        argb = margb;
        width = mwidth;
        height = mheight;
        jend = mjend;
        jbegin = mjbegin;
        this.start();
    }

    @Override
    public void run() {
        final int frameSize = width * height;
        int R, G, B, Y, U, V;
        for (int j = jbegin; j < jend; j++) {
            for (int i = 0; i < width; i++) {
                int num = i + j * width;
                // The frame comes out BGR, so R and B are deliberately swapped here
                B = Color.red(argb[num]);
                G = Color.green(argb[num]);
                R = Color.blue(argb[num]);
                // well-known RGB to YUV conversion
                Y = ((66 * R + 129 * G + 25 * B + 128) >> 8) + 16;
                U = ((-38 * R - 74 * G + 112 * B + 128) >> 8) + 128;
                V = ((112 * R - 94 * G - 18 * B + 128) >> 8) + 128;
                // NV21 has a plane of Y followed by interleaved V/U planes subsampled
                // by a factor of 2: for every 4 Y pixels there is 1 V and 1 U, taken
                // from every other pixel on every other scanline.
                yuv420sp[num] = (byte) ((Y < 0) ? 0 : ((Y > 255) ? 255 : Y));
                if (j % 2 == 0 && num % 2 == 0) {
                    // Index into the interleaved VU plane for this 2x2 block
                    int u_num = i + frameSize + j * width / 2;
                    int u_num1 = u_num + 1;
                    // Clamp V and U to [0, 255]
                    yuv420sp[u_num] = (byte) ((V < 0) ? 0 : ((V > 255) ? 255 : V));
                    yuv420sp[u_num1] = (byte) ((U < 0) ? 0 : ((U > 255) ? 255 : U));
                }
            }
        }
    }
}
Now for the audio/video decode plus encode. MediaExtractor separates the stream into video and audio. Once the video is separated, MediaExtractor.readSampleData writes into an InputBuffer, queueInputBuffer hands it to the decoder, and the decoder extracts each frame via getOutputBuffer (or getOutputImage). After the Canvas drawing, the frame can be handed to the encoder, followed by releaseOutputBuffer. Once the encoder is done, MediaMuxer muxes everything back together. In total this takes 2 MediaExtractor instances (one would also work for the audio/video demuxing), 4 MediaCodec instances (video decode, video encode, audio decode, audio encode), and one MediaMuxer (for the audio/video muxing). The flows, and a code sketch of the synchronous decode loop, are below.
The whole pipeline
video -> decode -> canvas drawing -> encode
MediaExtractor (demux audio and video) -> ... -> MediaMuxer (mux audio and video)
audio -> decode -> encode
Video decode flow
readSampleData (extractor) -> InputBuffer -> queueInputBuffer -> decode -> getOutputBuffer -> canvas drawing (result goes to the encoder) -> releaseOutputBuffer
Audio decode flow
readSampleData (extractor) -> InputBuffer -> queueInputBuffer -> decode -> getOutputBuffer (buffer data goes to the encoder) -> releaseOutputBuffer
Video encode flow
canvas drawing result -> InputBuffer -> queueInputBuffer -> encode -> getOutputBuffer (buffer written to the muxer) -> releaseOutputBuffer
Audio encode flow
decoded buffer data -> InputBuffer -> queueInputBuffer -> encode -> getOutputBuffer (buffer written to the muxer) -> releaseOutputBuffer
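As a concrete reference, here is a minimal sketch of the synchronous video decode loop the flows above describe (the variable names decoder, extractor, inputDone and decodeDone are mine, not the post's):
MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
boolean inputDone = false;
boolean decodeDone = false;
while (!decodeDone) {
    // Feed one compressed sample from the extractor into the decoder
    if (!inputDone) {
        int inIndex = decoder.dequeueInputBuffer(10000);
        if (inIndex >= 0) {
            ByteBuffer inBuf = decoder.getInputBuffer(inIndex);
            int size = extractor.readSampleData(inBuf, 0);
            if (size < 0) {
                decoder.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
                inputDone = true;
            } else {
                decoder.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
                extractor.advance();
            }
        }
    }
    // Drain one decoded frame and run the drawing step on it
    int outIndex = decoder.dequeueOutputBuffer(info, 10000);
    if (outIndex >= 0) {
        Image image = decoder.getOutputImage(outIndex);
        byte[] data = getDataFromImage(image, COLOR_FormatNV21);
        image.close();
        // ... NV21 -> Bitmap -> drawText -> getNV21 -> feed the video encoder ...
        decoder.releaseOutputBuffer(outIndex, false);
        if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
            decodeDone = true;
        }
    }
}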
Initially all 4 MediaCodec instances used asynchronous processing, and during audio encoding the sound came out wrong. While the program ran, I used AudioTrack to play the audio as a test: everything through the decode stage was fine, but once I played what went into the encoder's InputBuffer, the audio test failed. My guess is that because decode and encode aren't synchronized, the decoder may push buffer 2 while the encoder is still on buffer 1 or already on buffer 3, since in asynchronous mode each MediaCodec runs on its own thread. One way to solve this would be a queue: the decoder appends to a List and the encoder pulls items out one by one. I didn't try that; instead I switched the audio encoder to synchronous processing without a separate thread (sketched below), and that fixed the audio. Writing this program, I ran into a lot of traps and problems.
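For reference, a minimal sketch of that synchronous audio encode step, one buffer at a time on the main loop (pcmQueue, presentationTimeUs, audioEncoder, muxer and audioTrackIndex are stand-ins I made up; the post doesn't show its exact code):
MediaCodec.BufferInfo encInfo = new MediaCodec.BufferInfo();
// One decoded PCM chunk handed over from the audio decoder, with its timestamp
byte[] pcm = pcmQueue.poll();
if (pcm != null) {
    int inIndex = audioEncoder.dequeueInputBuffer(10000);
    if (inIndex >= 0) {
        ByteBuffer inBuf = audioEncoder.getInputBuffer(inIndex);
        inBuf.clear();
        inBuf.put(pcm);
        audioEncoder.queueInputBuffer(inIndex, 0, pcm.length, presentationTimeUs, 0);
    }
}
// Drain the encoded buffer and write it straight into the muxer
int outIndex = audioEncoder.dequeueOutputBuffer(encInfo, 10000);
if (outIndex >= 0) {
    ByteBuffer outBuf = audioEncoder.getOutputBuffer(outIndex);
    muxer.writeSampleData(audioTrackIndex, outBuf, encInfo);
    audioEncoder.releaseOutputBuffer(outIndex, false);
}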