https://community.hortonworks.com/questions/25789/how-to-dump-the-output-from-beeline.html Hi, I am trying to dump the output of a beeline query (below) to a file, but it prints all the logs along with the output in the file. beeline -u... Read More | Share it now!
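A common way to keep the log lines out of the dump is beeline's --silent and --outputformat options. This is only a sketch (it needs a running HiveServer2; the host, query, and table name are placeholders):

```shell
# Hypothetical connection string and query; --silent=true suppresses the
# informational log output, and --outputformat=csv2 gives clean CSV rows.
beeline -u jdbc:hive2://localhost:10000/default \
        --silent=true \
        --outputformat=csv2 \
        --showHeader=true \
        -e "SELECT id, name FROM my_table" > output.csv
```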
Category Archives: Hive
HIVE nested ARRAY in MAP data type
http://stackoverflow.com/questions/18812025/hive-nested-array-in-map-data-type Hive’s default delimiters are: Row Delimiter => Control-A (‘\001’) Collection Item Delimiter => Control-B (‘\002’) Map Key Delimiter... Read More | Share it now!
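To illustrate how those delimiters compose for a nested ARRAY inside a MAP, here is a small Python sketch that builds one text-serialized row. The level-4 separator '\004' for the inner array is an assumption based on LazySimpleSerDe's one-control-character-per-nesting-level scheme; the schema is made up:

```python
# Build one text row for a hypothetical Hive table with schema:
#   id INT, m MAP<STRING, ARRAY<INT>>
# using the default delimiters: \001 between fields, \002 between
# collection items (map entries), \003 between a map key and its value,
# and (assumed) \004 between elements of the array nested in the map.
FIELD_SEP = "\x01"    # between top-level columns
COLL_SEP = "\x02"     # between map entries
MAPKEY_SEP = "\x03"   # between a map key and its value
NESTED_SEP = "\x04"   # between elements of the array inside the map

def encode_row(row_id, mapping):
    entries = []
    for key, values in mapping.items():
        entries.append(key + MAPKEY_SEP + NESTED_SEP.join(str(v) for v in values))
    return str(row_id) + FIELD_SEP + COLL_SEP.join(entries)

line = encode_row(1, {"a": [1, 2], "b": [3]})
```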
Using Hive's array, map, and struct types
Hive provides composite data types. Structs: the fields inside a struct are accessed with dot (.) notation; for example, if a table column c has type STRUCT{a INT; b... Read More | Share it now!
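A minimal HiveQL sketch of the three composite types (table and column names are made up, and it needs a Hive session to run):

```sql
-- Hypothetical table exercising STRUCT, ARRAY and MAP columns.
CREATE TABLE complex_demo (
  c    STRUCT<a: INT, b: STRING>,   -- accessed as c.a, c.b
  tags ARRAY<STRING>,               -- accessed as tags[0]
  kv   MAP<STRING, INT>             -- accessed as kv['some_key']
)
ROW FORMAT DELIMITED
  FIELDS TERMINATED BY '\001'
  COLLECTION ITEMS TERMINATED BY '\002'
  MAP KEYS TERMINATED BY '\003';

SELECT c.a, tags[0], kv['some_key'] FROM complex_demo;
```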
Importing and exporting Hive data (local, cloud HDFS, HBase) and setting column delimiters
A Hive table's data can come from four sources: HBase, HDFS, the local filesystem, or another Hive table. Hive tables themselves come in two kinds: managed (internal) tables and external tables. HBase data can be exposed in Hive through a corresponding external table (see Hive/HBase integration). Managed vs. external tables: on DROP, a managed table deletes its data on Hadoop, while an external table does not; its data lives in external storage, and the Hive table is only a view onto that data, reading it in at query time. Managed table: CREATE TABLE... Read More | Share it now!
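The managed/external distinction described above can be sketched in HiveQL as follows (table names and the HDFS path are hypothetical):

```sql
-- Managed (internal) table: DROP TABLE also deletes the data on HDFS.
CREATE TABLE managed_t (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t';

-- External table: DROP TABLE only removes the metadata; the files
-- under LOCATION are left untouched.
CREATE EXTERNAL TABLE external_t (id INT, name STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/user/hive/external/external_t';
```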
Three ways to add jar files for custom Hive UDF, UDAF, and UDTF functions
After developing a jar file containing Hive UDF, UDAF, or UDTF functions, the jar must be placed into the Hive environment before the functions can be used. It can be added in one of three ways: 1. Using add jar... Read More | Share it now!
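The first method the post names, ADD JAR in the session, might look like this (the jar path, function name, and class name are placeholders):

```sql
-- Make the jar visible to the current session only.
ADD JAR /path/to/my-hive-udf.jar;

-- Register the UDF under a temporary name (class name is hypothetical).
CREATE TEMPORARY FUNCTION my_udf AS 'com.example.hive.MyUDF';

SELECT my_udf(col) FROM some_table;
```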
Hive Function Cheat Sheet
Hive Function Meta commands: SHOW FUNCTIONS – lists Hive functions and operators; DESCRIBE FUNCTION – displays a short description of the function; DESCRIBE FUNCTION EXTENDED – displays the extended description of the function. Types of Hive... Read More | Share it now!
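For instance, inspecting the built-in upper function with the meta commands above (exact output shape depends on the Hive version):

```sql
SHOW FUNCTIONS;                     -- list all functions and operators
DESCRIBE FUNCTION upper;            -- one-line synopsis of upper()
DESCRIBE FUNCTION EXTENDED upper;   -- synopsis plus usage details
```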
“add jar” command throws “Insufficient privileges to execute add” exception even for admin role
https://issues.apache.org/jira/browse/SENTRY-147 0: jdbc:hive2://......:10000/default> add jar /home/direp_hv_qa/satish/hive-udf-7.jar; Error: Insufficient privileges to execute add (state=42000,code=0) java.sql.SQLException: Insufficient... Read More | Share it now!
Hive: using a Fetch task instead of a MapReduce job for simple queries
If you query a single column of a table, Hive by default launches a MapReduce job to complete the task, as in: hive> SELECT id,... Read More | Share it now!
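Whether such a query runs as a plain fetch or as a MapReduce job is governed by the hive.fetch.task.conversion property; a sketch (accepted values vary by Hive version, and the table name is made up):

```sql
-- 'more' lets simple SELECT/FILTER/LIMIT queries bypass MapReduce and
-- run as a direct fetch task; 'minimal' restricts this to queries on
-- partition columns and LIMIT.
SET hive.fetch.task.conversion=more;
SELECT id, name FROM employees LIMIT 10;   -- served by a fetch task
```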
Configuring Parquet in Hive (CDH4.3)
CDH4.3 does not ship a ready-made Parquet package, so to use the Parquet format in Hive or Impala you have to install it by hand. When creating a Parquet-format table, you must specify the Parquet InputFormat, OutputFormat, and SerDe. The CREATE TABLE statement looks like this:

hive> create table parquet_test(x int, y string)
    > row format serde 'parquet.hive.serde.ParquetHiveSerDe'
    > stored as inputformat 'parquet.hive.DeprecatedParquetInputFormat'
    > outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
FAILED: SemanticException : Output Format must implement HiveOutputFormat, otherwise it should be either IgnoreKeyTextOutputFormat or SequenceFileOutputFormat

Submitting the statement fails because the class parquet.hive.DeprecatedParquetOutputFormat is not on Hive's CLASSPATH. The class lives in parquet-hive-1.2.5.jar under $IMPALA_HOME/lib, so creating a symlink in $HIVE_HOME/lib is enough:

cd $HIVE_HOME/lib
ln -s $IMPALA_HOME/lib/parquet-hive-1.2.5.jar

Resubmitting the CREATE TABLE statement then fails with:

hive> create table parquet_test(x int, y string)
    > row format serde 'parquet.hive.serde.ParquetHiveSerDe'
    > stored as inputformat 'parquet.hive.DeprecatedParquetInputFormat'
    > outputformat 'parquet.hive.DeprecatedParquetOutputFormat';
Exception in thread "main" java.lang.NoClassDefFoundError: parquet/hadoop/api/WriteSupport
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:247)
    at org.apache.hadoop.hive.ql.plan.CreateTableDesc.validate(CreateTableDesc.java:403)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeCreateTable(SemanticAnalyzer.java:8858)
    at org.apache.hadoop.hive.ql.parse.SemanticAnalyzer.analyzeInternal(SemanticAnalyzer.java:8190)
    at org.apache.hadoop.hive.ql.parse.BaseSemanticAnalyzer.analyze(BaseSemanticAnalyzer.java:258)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:459)
    at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:349)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:938)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:902)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:259)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:216)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:412)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:759)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:613)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.lang.ClassNotFoundException: parquet.hadoop.api.WriteSupport
    at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
    at java.security.AccessController.doPrivileged(Native Method)
    at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:306)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:247)
    ... 20 more

This error means some Parquet-related jar files are still missing; downloading them directly into $HIVE_HOME/lib fixes it:

cd /usr/lib/hive/lib
for f in parquet-avro parquet-cascading parquet-column parquet-common parquet-encoding parquet-generator parquet-hadoop parquet-hive parquet-pig parquet-scrooge parquet-test-hadoop2 parquet-thrift
do
  curl -O https://oss.sonatype.org/service/local/repositories/releases/content/com/twitter/${f}/1.2.5/${f}-1.2.5.jar
done
curl -O https://oss.sonatype.org/service/local/repositories/releases/content/com/twitter/parquet-format/1.0.0/parquet-format-1.0.0.jar

Resubmit the CREATE TABLE statement and it now succeeds. Once the table is created, data from other tables has to be loaded into the Parquet-format table, and the Parquet jars are needed while the HQL runs. There are two ways to provide them: one is to run add... Read More | Share it now!
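The first of the two methods the post mentions, running ADD JAR for each jar before the statement, might be sketched as follows (the exact jar list needed depends on the query; paths assume the jars were downloaded into /usr/lib/hive/lib as above):

```sql
-- Session-scoped: repeat for every Parquet jar the query needs.
ADD JAR /usr/lib/hive/lib/parquet-hive-1.2.5.jar;
ADD JAR /usr/lib/hive/lib/parquet-hadoop-1.2.5.jar;
ADD JAR /usr/lib/hive/lib/parquet-column-1.2.5.jar;
ADD JAR /usr/lib/hive/lib/parquet-common-1.2.5.jar;
ADD JAR /usr/lib/hive/lib/parquet-format-1.0.0.jar;

-- Hypothetical load from a source table into the Parquet table.
INSERT OVERWRITE TABLE parquet_test SELECT x, y FROM source_table;
```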
Hadoop Hive SQL syntax in detail
Hive is a data warehouse analysis system built on top of Hadoop; it provides a rich set of SQL query capabilities for analyzing data stored in Hadoop... Read More | Share it now!