Hive in Practice: Installing and Configuring MySQL


(Skip this step if wget is already installed.)

yum -y install wget

After installing this package, two MySQL yum repo sources become available:
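The repo package is not named above; as a hedged sketch, the usual choice on CentOS 7 is the MySQL community release RPM (the exact filename below is an assumption for illustration, and the commands are echoed rather than executed so the sketch runs anywhere):

```shell
# Hypothetical package name; adjust to your MySQL version.
repo_rpm="mysql57-community-release-el7-11.noarch.rpm"
echo "wget https://dev.mysql.com/get/$repo_rpm"
echo "rpm -ivh $repo_rpm"
# Installing the RPM drops mysql-community.repo and mysql-community-source.repo
# under /etc/yum.repos.d/ -- these are the two repo sources referred to above.
```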

Edit the configuration file /etc/my.cnf and add `skip-grant-tables` under the `[mysqld]` section.

Then restart the service (systemctl restart mysqld).

MySQL can now be logged into without a password.
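With skip-grant-tables active you still need to set a real root password before re-enabling authentication. A minimal sketch of the SQL to pipe into the passwordless session (the new password is a placeholder, and ALTER USER assumes MySQL 5.7+; older versions use SET PASSWORD instead):

```shell
# Generate the SQL for a passwordless `mysql -uroot` session.
# FLUSH PRIVILEGES first so ALTER USER works under skip-grant-tables.
cat <<'EOF'
FLUSH PRIVILEGES;
ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPass123!';
EOF
# i.e. pipe the statements above into:  mysql -uroot
```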

Remove the `skip-grant-tables` line from my.cnf.

Restart the service: systemctl restart mysqld

Log in with the password: mysql -uroot -p

Note: the CentOS firewall must be stopped for remote access (systemctl stop firewalld && systemctl disable firewalld).

Check the version number:

mysql -V

Start the MySQL service:

systemctl start mysqld.service

Stop the MySQL service:

systemctl stop mysqld.service

Restart the MySQL service:

systemctl restart mysqld.service

Check the current status of the MySQL service:

systemctl status mysqld.service

Enable the MySQL service at boot:

systemctl enable mysqld.service

Disable the MySQL service at boot:

systemctl disable mysqld.service

Check for and remove the preinstalled MariaDB libraries, which conflict with MySQL:

rpm -qa|grep mariadb

rpm -e --nodeps mariadb-libs-5.5.44-2.el7.centos.x86_64

Note: in the log output, the XXXXXXXX at the end of root@localhost: XXXXXXXX is the initial root password.
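That line comes from the MySQL error log (default path on CentOS 7: /var/log/mysqld.log), and the password is the last field of the "temporary password" line. Simulated here on a canned log line so the sketch is runnable:

```shell
# Write a sample log line like the one mysqld emits on first start.
log=$(mktemp)
echo 'A temporary password is generated for root@localhost: Xyz#123pw' > "$log"
# Extract the last field of the matching line: the initial password.
grep 'temporary password' "$log" | awk '{print $NF}'   # prints Xyz#123pw
rm -f "$log"
```

On a real install, replace `"$log"` with /var/log/mysqld.log.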

Save the file after editing.

1. Hive command-line mode: run the /hive/bin/hive executable directly, or run hive --service cli

Used for querying from the Linux command line; the query syntax is largely similar to MySQL's.

2. Hive web UI (port 9999) startup:

hive --service hwi

Used to access Hive through a browser; of limited practical use.

3. Hive remote service (port 10000) startup:

hive --service hiveserver

or

hive --service hiveserver 10000 >/dev/null 2>&1 &

Note:

Hive JDBC connection URL: jdbc:hive://192.168.6.116:10000/default (default Hive port: 10000; default database: default)
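For scripting, note that the jdbc:hive:// scheme belongs to the original HiveServer; HiveServer2 (also port 10000 by default, and the only server shipped with Hive 2.x) uses jdbc:hive2:// and can be reached with beeline. A small sketch building the URL from its parts:

```shell
# Build the JDBC URL (host/port taken from the text above).
HIVE_HOST=192.168.6.116
HIVE_PORT=10000
JDBC_URL="jdbc:hive2://${HIVE_HOST}:${HIVE_PORT}/default"
echo "$JDBC_URL"
# To connect with HiveServer2's CLI client (not executed here):
#   beeline -u "$JDBC_URL"
```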

#Hive-related resources

#http://blog.csdn.net/u013310025/article/details/70306421

#https://www.cnblogs.com/guanhao/p/5641675.html

#http://blog.csdn.net/wisgood/article/details/40560799

#http://blog.csdn.net/seven_zhao/article/details/46520229

#Get the host's IP and hostname

export password='qwe'

export your_ip=$(ip ad|grep inet|grep -v inet6|grep -v 127.0.0.1|awk '{print $2}'|cut -d/ -f1)

export your_hosts=$(cat /etc/hosts |grep $(echo $your_ip)|awk '{print $2}')
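The two lines above scrape the first non-loopback IPv4 address out of `ip ad` and then look up the matching hostname in /etc/hosts. The pipeline can be tried in isolation on canned output, so the sketch runs anywhere:

```shell
# Sample `ip ad` output: one loopback line, one real interface line.
sample='inet 127.0.0.1/8 scope host lo
inet 192.168.6.116/24 brd 192.168.6.255 scope global eth0'
# Same filters as the script: keep inet, drop inet6 and loopback,
# take field 2, strip the /24 prefix length.
your_ip=$(echo "$sample" | grep inet | grep -v inet6 | grep -v 127.0.0.1 \
          | awk '{print $2}' | cut -d/ -f1)
echo "$your_ip"   # prints 192.168.6.116
```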

#Install MySQL (note: this block uses Debian/Ubuntu tooling, debconf preseeding plus apt, and installs mariadb-server)

echo "mysql-server-5.5 mysql-server/root_password password $password" | debconf-set-selections

echo "mysql-server-5.5 mysql-server/root_password_again password $password" | debconf-set-selections

apt-get -y install mariadb-server python-pymysql --force-yes

echo "[mysqld]

bind-address = $your_ip

default-storage-engine = innodb

innodb_file_per_table

max_connections = 4096

collation-server = utf8_general_ci

character-set-server = utf8" | tee /etc/mysql/conf.d/openstack.cnf

sed -i "s/127.0.0.1/0.0.0.0/g" /etc/mysql/mariadb.conf.d/50-server.cnf

service mysql restart

#Create the hive user and grant privileges

mysql -uroot -p$password <<EOF

CREATE DATABASE hive;

CREATE USER 'hive' IDENTIFIED BY "$password";

GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%' WITH GRANT OPTION;

FLUSH PRIVILEGES;

EOF

#Add hive environment variables

hive_flag=$(grep "hive" /etc/profile)

if [ ! -n "$hive_flag" ];then

    sed -i "s/\$PATH:/\$PATH:\/opt\/apache-hive-2.3.2-bin\/bin:/g" /etc/profile

else

    echo "Already exists!"

fi

#Apply the environment variables in the current shell

source /etc/profile

#Edit the hive configuration

echo "$(grep "JAVA_HOME=" /etc/profile)

$(grep "HADOOP_HOME=" /etc/profile)

export HIVE_HOME=/opt/apache-hive-2.3.2-bin

export HIVE_CONF_DIR=/opt/apache-hive-2.3.2-bin/conf" |tee -a /opt/apache-hive-2.3.2-bin/conf/hive-env.sh

sed -i "s/hadoop3/$your_hosts/g" /opt/apache-hive-2.3.2-bin/conf/hive-site.xml

#Create the following directories in HDFS and grant full permissions

hdfs dfs -mkdir -p /user/hive/warehouse

hdfs dfs -mkdir -p /user/hive/tmp

hdfs dfs -mkdir -p /user/hive/log

hdfs dfs -chmod -R 777 /user/hive/warehouse

hdfs dfs -chmod -R 777 /user/hive/tmp

hdfs dfs -chmod -R 777 /user/hive/log

mkdir -p /user/hive/tmp

#Initialize the hive metastore schema

schematool -dbType mysql -initSchema

#Hive installation ends here

#######################

#Create a hive table

create table film

(name string,

time string,

score string,

id int,

time1 string,

score1 string,

name2 string,

score2 string)

ROW FORMAT DELIMITED

FIELDS TERMINATED BY ''

STORED AS TEXTFILE;

#Load a local text file into hive

load data local inpath '/root/my.txt' overwrite into table film;

#Miscellaneous hive notes

create table patition_table(name string,salary float,gender string) partitioned by (dt string,dep string) row format delimited fields terminated by ',' stored as textfile;

create database movie;

create table movie(name string,data string,record int);

#Drop the table

DROP TABLE if exists movies;

#Create the table

CREATE TABLE movies(

    name string,

    data string,

    record int

) COMMENT 'Data records for films released in 2014' ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE;

load data local inpath 'dat0204.log' into table movies;

#Using dfs commands inside the hive shell

hive> dfs -ls /user/hive/warehouse/wyp;

select * from movies;

hive -e "select * from test" >>res.csv 

or:

hive -f sql.q >>res.csv 

where the file sql.q contains the query you want to run
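The sql.q file can be generated inline; a runnable sketch (hive itself is not invoked here, and the table name `test` is taken from the example above):

```shell
# Write the query file that `hive -f sql.q >>res.csv` would consume.
cat > sql.q <<'EOF'
select * from test;
EOF
cat sql.q   # show what hive -f would execute
rm -f sql.q
```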

#Export to the local filesystem

hive> insert overwrite local directory '/home/wyp/wyp'

hive> select * from wyp;

Export to HDFS

This is just as simple as exporting to the local filesystem, using the statements below:

hive> insert overwrite directory '/home/wyp/hdfs'

hive> select * from wyp;

The exported data is saved under /home/wyp/hdfs in HDFS. Note that compared with the HQL for exporting to the local filesystem, this statement omits the `local` keyword, which changes where the data is written.

#Save the extracted data into a temporary table

insert overwrite table movies

Local load: load data local inpath '/Users/tifa/Desktop/1.txt' into table test

Load from HDFS: load data inpath '/user/hadoop/1.txt' into table test_external

Overwrite the existing data: load data inpath '/user/hadoop/1.txt' overwrite into table test_external


Original source: https://outofmemory.cn/zaji/5900037.html