MapReduce Fundamentals (10): Reading and Writing ORC Files
1. ORC File
ORC is a columnar file format that originated in Hive. It offers a very high compression ratio and fast reads, and it quickly replaced the earlier RCFile to become one of the most commonly used file formats in Hive.
2. Building the ORC jars
Download the source tarball orc-1.2.1.tar.gz from http://orc.apache.org/ and build the jars from it.
The build described here was done on Ubuntu 14 with JDK 1.8, CMake 3.2.2, and Maven 3.0.5 installed.
Unpack orc-1.2.1.tar.gz, change into the orc-1.2.1/java directory, and run mvn package to build the jar files.
After the build completes, collect:
orc-mapreduce-1.2.1.jar from orc-1.2.1/java/mapreduce/target
orc-core-1.2.1.jar from orc-1.2.1/java/core/target
orc-tools-1.2.1.jar from orc-1.2.1/java/tools/target
orc-tools-1.2.1-uber.jar from orc-1.2.1/java/tools/target
hive-storage-api-2.1.1-pre-orc.jar from orc-1.2.1/java/storage-api/target
Copy these jars into the MapReduce project's lib directory.
Note: when submitting the job to the Hadoop cluster, these third-party jars must be bundled along with it.
3. Basic MapReduce code for reading and writing ORC files
```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.orc.TypeDescription;
import org.apache.orc.mapred.OrcStruct;
import org.apache.orc.mapreduce.OrcInputFormat;
import org.apache.orc.mapreduce.OrcOutputFormat;

public class ORCSample {

    public static class ORCMapper extends Mapper<NullWritable, OrcStruct, Text, Text> {
        private Text oKey = new Text();
        private Text oValue = new Text();

        @Override
        public void map(NullWritable key, OrcStruct value, Context context)
                throws IOException, InterruptedException {
            // You need to know the structure stored in the OrcStruct up front.
            StringBuilder bf = new StringBuilder();
            if (value.getNumFields() == 3) {
                Text valAccount = (Text) value.getFieldValue(0);
                Text valDomain = (Text) value.getFieldValue(1);
                Text valPost = (Text) value.getFieldValue(2);
                bf.append(valAccount.toString()).append("|")
                  .append(valDomain.toString()).append("|")
                  .append(valPost.toString());
            }
            oValue.set(bf.length() > 0 ? bf.toString() : "");
            oKey.set("");
            context.write(oKey, oValue);
        }
    }

    public static class ORCReducer extends Reducer<Text, Text, NullWritable, OrcStruct> {
        // For how OrcStruct fields map to Hadoop types, see
        // https://orc.apache.org/docs/mapreduce.html
        private TypeDescription schema = TypeDescription.fromString(
                "struct<account:string,domain:string,post:string>");
        private OrcStruct orcs = (OrcStruct) OrcStruct.createValue(schema);
        private final NullWritable nw = NullWritable.get();

        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            for (Text val : values) {
                String line = val.toString();
                if (line.isEmpty()) continue;
                // "|" is a regex metacharacter, so it must be escaped for split().
                String[] strVals = line.split("\\|");
                if (strVals.length == 3) {
                    orcs.setFieldValue(0, new Text(strVals[0]));
                    orcs.setFieldValue(1, new Text(strVals[1]));
                    orcs.setFieldValue(2, new Text(strVals[2]));
                    context.write(nw, orcs);
                }
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // The output schema must be set here; otherwise the reduce side
        // fails with null values when writing.
        conf.set("orc.mapred.output.schema",
                "struct<account:string,domain:string,post:string>");
        Job job = Job.getInstance(conf, "ORCSample");
        job.setJarByClass(ORCSample.class);
        job.setMapperClass(ORCMapper.class);
        job.setReducerClass(ORCReducer.class);
        // Map-side types
        job.setInputFormatClass(OrcInputFormat.class);
        job.setMapOutputKeyClass(Text.class);   // takes precedence over setOutputKeyClass for the map output
        job.setMapOutputValueClass(Text.class);
        // Reduce-side types
        job.setNumReduceTasks(1);
        job.setOutputFormatClass(OrcOutputFormat.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(OrcStruct.class);
        // Input and output paths
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```
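The mapper and reducer round-trip each record through a pipe-delimited string, and since `|` is a regex metacharacter, `split` needs the escaped pattern `\\|`. A minimal stdlib-only sketch of that encode/decode step (the class name `PipeCodec` is illustrative, not part of the original job):

```java
// Illustrates the pipe-delimited encode/decode used between the mapper and reducer.
public class PipeCodec {

    // Join fields with "|" the way the mapper builds its output value.
    public static String encode(String[] fields) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < fields.length; i++) {
            if (i > 0) sb.append("|");
            sb.append(fields[i]);
        }
        return sb.toString();
    }

    // Split back into fields the way the reducer does; "|" must be escaped
    // because String.split takes a regular expression.
    public static String[] decode(String value) {
        return value.split("\\|");
    }

    public static void main(String[] args) {
        String line = encode(new String[] {"user01", "example.com", "hello"});
        String[] fields = decode(line);
        // prints: user01|example.com|hello -> 3 fields
        System.out.println(line + " -> " + fields.length + " fields");
    }
}
```

An unescaped `split("|")` would instead split between every character, which is why the reducer's length-3 check would then never match.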
Summary
That is the whole workflow for reading and writing ORC files from MapReduce: build the ORC jars, wire OrcInputFormat and OrcOutputFormat into the job, and set the output schema through orc.mapred.output.schema.