Custom Function Examples
Table of Contents

Custom UDF Function
1. Requirement
2. Maven Project Setup
3. Implementation
4. Packaging
5. Importing into Hive

Custom UDTF Function
1. Requirement
2. Implementation
3. Importing into Hive
Custom UDF Function

1. Requirement

Write a UDF that computes the length of a given string, for example:
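The example that originally followed here was an image and did not survive extraction; a minimal sketch of the intended behavior, assuming the function is registered under the name my_len as in step 5 below:

select my_len("abcd");
-- expected output: 4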
2. Maven Project Setup

Create a Maven project and add the hive-exec dependency:
<dependencies>
    <dependency>
        <groupId>org.apache.hive</groupId>
        <artifactId>hive-exec</artifactId>
        <version>3.1.2</version>
    </dependency>
</dependencies>

3. Implementation
Write the implementation class:
package com.yingzi.hive1;

import org.apache.hadoop.hive.ql.udf.generic.GenericUDF;
import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentLengthException;
import org.apache.hadoop.hive.ql.exec.UDFArgumentTypeException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class MyStringLength extends GenericUDF {

    @Override
    public ObjectInspector initialize(ObjectInspector[] objectInspectors) throws UDFArgumentException {
        // Check the number of input arguments
        if (objectInspectors.length != 1) {
            throw new UDFArgumentLengthException("Input Args Length Error!!!");
        }
        // Check the type of the input argument
        if (!objectInspectors[0].getCategory().equals(ObjectInspector.Category.PRIMITIVE)) {
            throw new UDFArgumentTypeException(0, "Input Args Type Error!!!");
        }
        // The function itself returns an int, so return an int object inspector
        return PrimitiveObjectInspectorFactory.javaIntObjectInspector;
    }

    @Override
    public Object evaluate(DeferredObject[] deferredObjects) throws HiveException {
        if (deferredObjects[0].get() == null) {
            return 0;
        }
        return deferredObjects[0].get().toString().length();
    }

    @Override
    public String getDisplayString(String[] strings) {
        return "";
    }
}

4. Packaging
Build the project into a jar and copy it to the hive/lib directory on the virtual machine.
5. Importing into Hive

1) Add the jar to Hive's classpath
add jar /opt/module/hive/lib/HIVE_test-1.0-SNAPSHOT.jar;
2) Create a temporary function linked to the compiled Java class
create temporary function my_len as "com.yingzi.hive1.MyStringLength";
3) Verify that the function was imported successfully
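The screenshot that demonstrated this step is missing; a minimal check, assuming the steps above succeeded:

describe function my_len;
select my_len("hello world");
-- expected output: 11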
Custom UDTF Function

1. Requirement

Write a UDTF that splits a string on an arbitrary delimiter into individual words, for example:
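The example here was also an image; a plausible illustration, assuming the function is registered as myudtf in step 3 below (the column name lineToWord comes from the implementation):

select myudtf("hello,world,hadoop,hive", ",");
-- lineToWord
-- hello
-- world
-- hadoop
-- hive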
2. Implementation

package com.yingzi.hive1;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

import java.util.ArrayList;
import java.util.List;

public class MYUDTF extends GenericUDTF {

    private ArrayList<String> outList = new ArrayList<>();

    @Override
    public StructObjectInspector initialize(StructObjectInspector argOIs) throws UDFArgumentException {
        // 1. Define the column names and types of the output
        List<String> fieldNames = new ArrayList<>();
        List<ObjectInspector> fieldOIs = new ArrayList<>();
        // 2. Add the output column name and type
        fieldNames.add("lineToWord");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }

    @Override
    public void process(Object[] objects) throws HiveException {
        // 1. Get the original data
        String object = objects[0].toString();
        // 2. Get the second argument, which is the delimiter
        String splitKey = objects[1].toString();
        // 3. Split the original data on the delimiter
        String[] fields = object.split(splitKey);
        // 4. Iterate over the pieces and emit each one as a row
        for (String field : fields) {
            // The collection is reused, so clear it first
            outList.clear();
            // Add the current word to the collection
            outList.add(field);
            // Emit the row
            forward(outList);
        }
    }

    @Override
    public void close() throws HiveException {
    }
}

3. Importing into Hive
Same as before: add the jar to Hive's classpath, create a temporary function for the class, and verify it.
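Spelled out, under the same assumptions as the UDF section (the same jar is reused, and myudtf is a name chosen here for illustration):

add jar /opt/module/hive/lib/HIVE_test-1.0-SNAPSHOT.jar;
create temporary function myudtf as "com.yingzi.hive1.MYUDTF";
select myudtf("hello,world,hadoop,hive", ",");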