https://community.hortonworks.com/questions/25789/how-to-dump-the-output-from-beeline.html
Hi,
I am trying to dump the output of a Beeline query (below) to a file, but it prints all the logs along with the output in the file.
beeline -u jdbc:hive2://somehost_ip/ -f hive.hql > op.txt
Here is the output:
0: jdbc:hive2://10.211.1.5:10000/> use db;
0: jdbc:hive2://10.211.1.5:10000/> select count(*) from sample_table;
+------+--+
| _c0 |
+------+--+
| 131 |
+------+--+
0: jdbc:hive2://10.211.1.5:10000/>
0: jdbc:hive2://10.211.1.5:10000/>
0: jdbc:hive2://10.211.1.5:10000/>
Can someone let me know how to get only the query output in a file, the way we get it with the Hive CLI?
You have different options.

1) You can control up to a point how the Beeline output is formatted and then just save it to a file with Linux. For example:

beeline --outputformat=csv2 xxx > output.csv

(see the relevant parameters from the beeline help below)

--showHeader=[true/false] show column names in query results
--headerInterval=ROWS; the interval at which headers are displayed
--fastConnect=[true/false] skip building table/column list for tab-completion
--autoCommit=[true/false] enable/disable automatic transaction commit
--verbose=[true/false] show verbose error messages and debug info
--showWarnings=[true/false] display connection warnings
--showNestedErrs=[true/false] display nested errors
--numberFormat=[pattern] format numbers using DecimalFormat pattern
--force=[true/false] continue running script even after errors
--maxWidth=MAXWIDTH the maximum width of the terminal
--maxColumnWidth=MAXCOLWIDTH the maximum width to use when displaying columns
--silent=[true/false] be more silent
--autosave=[true/false] automatically save preferences
--outputformat=[table/vertical/csv2/tsv2/dsv/csv/tsv] format mode for result display
Note that csv and tsv are deprecated - use csv2 and tsv2 instead.
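Putting option 1 together for the query in the question, a minimal sketch (the host and database come from the question; the output file name is a placeholder, and --silent=true should suppress most of the extra log lines):

beeline -u jdbc:hive2://10.211.1.5:10000/db \
    --silent=true \
    --showHeader=true \
    --outputformat=csv2 \
    -e "select count(*) from sample_table" > op.csv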
2) For more control and better performance I wrote a little Java tool once. It's really only a couple of lines of JDBC code.
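The tool itself isn't posted in the thread, but here is a minimal sketch of that idea, assuming the Hive JDBC driver is on the classpath; the connection URL, credentials, query, delimiter, and output file are placeholders:

import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.Statement;

public class BeelineDump {
    public static void main(String[] args) throws Exception {
        // placeholder connection details - replace with your own host, database and user
        String url = "jdbc:hive2://10.211.1.5:10000/db";
        try (Connection conn = DriverManager.getConnection(url, "user", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("select * from sample_table");
             PrintWriter out = new PrintWriter("op.csv")) {
            ResultSetMetaData md = rs.getMetaData();
            int cols = md.getColumnCount();
            while (rs.next()) {
                StringBuilder row = new StringBuilder();
                for (int i = 1; i <= cols; i++) {
                    if (i > 1) row.append(',');
                    row.append(rs.getString(i));  // raw values only, no beeline log noise
                }
                out.println(row);
            }
        }
    }
}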
3) And finally, as Ana wrote, you can just write the data into an external table in HDFS and specify the output format you want.
Like
create external table test ROW FORMAT delimited fields terminated by '|' location '/tmp/myfolder' as select * from mytable;
You can then get that output into the local file system with:
hadoop fs -getmerge /tmp/myfolder myoutput.csv
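For completeness, a sketch of option 3 end to end, assuming the CREATE statement above is saved in a file called export.hql (a hypothetical name) and that /tmp/myfolder is writable on HDFS:

# run the HQL that writes the delimited data to /tmp/myfolder on HDFS
beeline -u jdbc:hive2://10.211.1.5:10000/db -f export.hql

# merge the part files into a single local file
hadoop fs -getmerge /tmp/myfolder myoutput.csv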