
Word Count with the CDH Example Programs

WordCount is the classic "Hello World" program for Hadoop. CDH ships with a wordcount example that can be used to verify that a deployment succeeded.

# Unpack the complete works of Shakespeare, prepared in advance
[sujx@elephant ~]$ gzip -d shakespeare.txt.gz

# Upload it to the Hadoop file system
[sujx@elephant ~]$ hdfs dfs -mkdir /user/sujx/input
[sujx@elephant ~]$ hdfs dfs -put shakespeare.txt /user/sujx/input

# List the available example programs
[sujx@elephant ~]$ hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar
An example program must be given as the first argument.
Valid program names are:
  aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
  aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
  bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
  dbcount: An example job that count the pageview counts from a database.
  distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
  grep: A map/reduce program that counts the matches of a regex in the input.
  join: A job that effects a join over sorted, equally partitioned datasets
  multifilewc: A job that counts words from several files.
  pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
  pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
  randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
  randomwriter: A map/reduce program that writes 10GB of random data per node.
  secondarysort: An example defining a secondary sort to the reduce.
  sort: A map/reduce program that sorts the data written by the random writer.
  sudoku: A sudoku solver.
  teragen: Generate data for the terasort
  terasort: Run the terasort
  teravalidate: Checking results of terasort
  wordcount: A map/reduce program that counts the words in the input files.
  wordmean: A map/reduce program that counts the average length of the words in the input files.
  wordmedian: A map/reduce program that counts the median length of the words in the input files.
  wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.

# Run the MapReduce job; the output directory is created automatically
[sujx@elephant ~]$ hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar wordcount /user/sujx/input/shakespeare.txt /user/sujx/output/
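
One caveat: MapReduce refuses to start if the output directory already exists and aborts with a FileAlreadyExistsException. To re-run the job, remove the directory first (a minimal sketch; -skipTrash bypasses the HDFS trash):

# Remove the previous output before re-running the job
[sujx@elephant ~]$ hdfs dfs -rm -r -skipTrash /user/sujx/output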

# Check the output
[sujx@elephant ~]$ hdfs dfs -ls /user/sujx/output
Found 4 items
-rw-r--r--   3 sujx supergroup          0 2020-03-09 02:47 /user/sujx/output/_SUCCESS
-rw-r--r--   3 sujx supergroup     238211 2020-03-09 02:47 /user/sujx/output/part-r-00000
-rw-r--r--   3 sujx supergroup     236617 2020-03-09 02:47 /user/sujx/output/part-r-00001
-rw-r--r--   3 sujx supergroup     238668 2020-03-09 02:47 /user/sujx/output/part-r-00002

# Inspect the contents
[sujx@elephant ~]$ hdfs dfs -tail /user/sujx/output/part-r-00000
.       3
writhled        1
writing,        4
writings.       1
writs   1
written,        3
wrong   112
wrong'd-        1
wrong-should    1
wrong.  39
wrong:  1
wronged 11
wronged.        3
wronger,        1
wronger;        1
wrongfully?     1
wrongs  40
wrongs, 9
wrongs; 9
wrote?  1
wrought,        4
…………
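
The job ran with three reducers, so the counts are spread across three part files. To pull them into a single local file, hdfs dfs -getmerge concatenates all parts (a small sketch; the sort step, which ranks words by count, is just for inspection):

# Merge all part files into one local file and show the most frequent words
[sujx@elephant ~]$ hdfs dfs -getmerge /user/sujx/output wordcount.txt
[sujx@elephant ~]$ sort -k2 -rn wordcount.txt | head
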
Getting Started with HDFS

HDFS (Hadoop Distributed File System) is a scalable, fault-tolerant, high-performance distributed file system. It replicates data asynchronously, follows a write-once-read-many model, and is primarily responsible for storage. See [1] for the underlying concepts and [2] for more Hadoop commands. Here we run a simple experiment to exercise its file management features.

Creating a User

Even in a lab environment it is not a good idea to log in and work as root directly, so we create a regular account.

# Run on the elephant host
# Install ansible
[root@elephant ~]# yum install -y ansible
# Add all of the host names to /etc/ansible/hosts
# Create the playbook
[root@elephant ~]# mkdir playbook
[root@elephant ~]# vim ./playbook/useradd.yaml
---
- hosts: hadoop
  remote_user: root
  vars_prompt:
    - name: user_name
      prompt: Enter Username
      private: no
    - name: user_passwd
      prompt: Enter Password
      encrypt: "sha512_crypt"
      confirm: yes
  tasks:
    - name: create user
      user:
        name: "{{user_name}}"
        password: "{{user_passwd}}"

# Run the playbook
[root@elephant ~]# ansible-playbook ./playbook/useradd.yaml
Enter Username: sujx
Enter Password:
confirm Enter Password:

PLAY [hadoop] *****************************************************************************
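
Two notes on the playbook: the encrypt: sha512_crypt prompt relies on the passlib Python library on the control node, and a quick ad-hoc command confirms the account exists on every host (a sketch, using the same hadoop group as the play):

[root@elephant ~]# pip install passlib   # only needed if vars_prompt complains about passlib
[root@elephant ~]# ansible hadoop -a 'id sujx'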

Working with HDFS Files

First we upload a prepared 481 MB file (access_log) to the user's home directory and see how HDFS stores it.

[root@elephant ~]# su hdfs
[hdfs@elephant root]$ hadoop fs -mkdir /user/sujx
[hdfs@elephant root]$ hadoop fs -chown sujx /user/sujx
[hdfs@elephant root]$ exit
[root@elephant ~]# su - sujx
[sujx@elephant ~]$ hdfs dfs -put access_log weblog

Through the HDFS web console we can then see how the file is stored: it is kept as 3 replicas, split into 128 MB blocks distributed across the DataNodes. Here the file was split into 4 blocks.
(Screenshots: the file listing and block details in the HDFS web console.)

We can see the stored blocks on the lion host:

[root@lion ~]# tree /dfs
/dfs
`-- dn
    |-- current
    |   |-- BP-752680285-192.168.174.131-1582986010714
    |   |   |-- current
    |   |   |   |-- VERSION
    |   |   |   |-- dfsUsed
    |   |   |   |-- finalized
    |   |   |   |   `-- subdir0
    |   |   |   |       |-- subdir0
    |   |   |   |       |-- subdir1
    |   |   |   |       |-- subdir2
    |   |   |   |       |-- subdir3
    |   |   |   |       |   |-- blk_1073742734
    |   |   |   |       |   |-- blk_1073742734_1910.meta
    |   |   |   |       `-- subdir4
    |   |   |   `-- rbw
    |   |   |-- scanner.cursor
    |   |   `-- tmp
    |   `-- VERSION
    `-- in_use.lock
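
To map the HDFS file back to these on-disk blk_* files, hdfs fsck can list every block of the file along with the DataNodes holding its replicas (a sketch, assuming the weblog file uploaded above):

[root@lion ~]# sudo -u hdfs hdfs fsck /user/sujx/weblog -files -blocks -locations
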
Offline Deployment of CDH 5.16.2

After working through an offline deployment of CDH 6, I found that Cloudera's official tutorial is based on CDH 5.10. CDH 6 also uses more memory than my machines could comfortably handle, so I went through a CDH 5 deployment as well.

Template Deployment

  1. Install CentOS 7 and bring it fully up to date with yum;
  2. Synchronize all hosts with the Aliyun time servers;
  3. Disable selinux and firewalld;
  4. Install the JDK and the MySQL connector;
[root@localhost ~]# wget https://repo.huaweicloud.com/java/jdk/8u202-b08/jdk-8u202-linux-x64.rpm
[root@localhost ~]# yum localinstall jdk-8u202-linux-x64.rpm
[root@localhost ~]# mv mysql-connector-java-5.1.39-bin.jar /usr/share/java/mysql-connector-java.jar 
  5. Disable transparent huge pages;
[root@localhost ~]# vim /etc/rc.d/rc.local

# Append the following at the end of the file
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi

if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

# Make rc.local executable
[root@localhost ~]# chmod +x /etc/rc.d/rc.local
[root@localhost ~]# sh /etc/rc.d/rc.local

# Verify
[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/defrag
always madvise [never]
[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
  6. Adjust the swappiness;
[root@localhost ~]# echo 10 > /proc/sys/vm/swappiness
[root@localhost ~]# echo vm.swappiness = 10 >> /etc/sysctl.conf
  7. Raise the open-file limit (a quick verification sketch follows this step);
[root@localhost ~]# vim /etc/security/limits.conf
# Add the following
* soft nofile 100000
* hard nofile 100000
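
A quick sanity check of the tuning steps above (a sketch; the nofile limit only takes effect in a fresh login session):

[root@localhost ~]# cat /sys/kernel/mm/transparent_hugepage/enabled   # expect "[never]"
[root@localhost ~]# sysctl vm.swappiness                              # expect vm.swappiness = 10
[root@localhost ~]# ulimit -n                                         # expect 100000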
  8. Finish up by unconfiguring the template;
[root@localhost ~]# sys-unconfig
  9. Download CM and CDH.

CM download link

CDH download link

Agent Deployment

  1. Set up ssh trust between the hosts;
  2. Install ansible and package up the agent RPMs;

# Install
[root@elephant ~]# yum install -y ansible
[root@elephant ~]# ls
CDH-5.16.2-1.cdh5.16.2.p0.8-el5.parcel.sha1
CDH-5.16.2-1.cdh5.16.2.p0.8-el7.parcel
Mail
anaconda-ks.cfg
cloudera-manager-agent-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm
cloudera-manager-daemons-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm
cloudera-manager-server-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm
cloudera-manager-server-db-2-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm
enterprise-debuginfo-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm
hadoop.tar.gz
jdk-8u202-linux-x64.rpm
manifest.json
mysql-connector-java-5.1.39-bin.jar
[root@elephant ~]# mkdir client
[root@elephant ~]# mv cloudera-manager-daemons-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm client/
[root@elephant ~]# mv cloudera-manager-agent-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm client/
[root@elephant ~]# tar zcvf client.tar.gz client/
  3. Distribute the package to all hosts;
[root@elephant ~]# ansible all -m copy -a 'src=/root/client.tar.gz dest=/root/'
[root@elephant ~]# ansible all -a 'tar zxf /root/client.tar.gz'
  4. Install the agent packages.
[root@elephant ~]# ansible all -a 'yum localinstall /root/client/cloudera-manager-daemons-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm -y'
[root@elephant ~]# ansible all -a 'yum localinstall /root/client/cloudera-manager-agent-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm -y'

# Point the agents at the server by editing the config file
[root@elephant ~]# sed -i 's/server_host=localhost/server_host=lion/g' /etc/cloudera-scm-agent/config.ini
[root@elephant ~]# ansible lion,tiger,horse,monkey -m copy -a 'src=/etc/cloudera-scm-agent/config.ini dest=/etc/cloudera-scm-agent/'

# Enable and restart the agent service
[root@elephant ~]# ansible all -a 'systemctl enable cloudera-scm-agent --now'
[root@elephant ~]# ansible all -a 'systemctl restart cloudera-scm-agent'
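
Before moving on, it is worth confirming that the config change reached every host and that the agents are running (a sketch; server_host is the key the sed command rewrote):

[root@elephant ~]# ansible all -a 'grep ^server_host /etc/cloudera-scm-agent/config.ini'
[root@elephant ~]# ansible all -a 'systemctl is-active cloudera-scm-agent'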

Manager Host Deployment

  1. Install the Cloudera Manager server and stage the parcel;
[root@elephant ~]# ansible lion -m copy -a 'src=/root/cloudera-manager-server-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm dest=/root/'
[root@elephant ~]# ansible lion -m copy -a 'src=/root/CDH-5.16.2-1.cdh5.16.2.p0.8-el7.parcel dest=/opt/cloudera/parcel-repo'
[root@elephant ~]# ansible lion -m copy -a 'src=/root/manifest.json dest=/opt/cloudera/parcel-repo'
[root@elephant ~]# ansible lion -a 'yum localinstall /root/cloudera-manager-server-5.16.2-1.cm5162.p0.7.el7.x86_64.rpm -y'
  2. Deploy the database.
[root@elephant ~]# ansible lion -a 'yum install -y mariadb mariadb-server'
[root@elephant ~]# ssh lion
[root@lion ~]# vim /etc/my.cnf.d/server.cnf
[mysqld]
key_buffer = 16M
key_buffer_size = 32M
max_allowed_packet = 32M
thread_stack = 256K
thread_cache_size = 64
query_cache_limit = 8M
query_cache_size = 64M
query_cache_type = 1
max_connections = 550
server_id=1

binlog_format = mixed
read_buffer_size = 2M
read_rnd_buffer_size = 16M
sort_buffer_size = 8M
join_buffer_size = 8M

# InnoDB settings
innodb_file_per_table = 1
innodb_flush_log_at_trx_commit  = 2
innodb_log_buffer_size = 64M
innodb_buffer_pool_size = 4G
innodb_thread_concurrency = 8
innodb_flush_method = O_DIRECT
innodb_log_file_size = 512M

[root@lion ~]# systemctl enable mariadb --now

[root@lion ~]# vim cdh.sql

CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'passwd';
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'passwd';

# Import the databases
[root@lion ~]# mysql -uroot -p < cdh.sql
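
A quick check that the databases and grants came through (a sketch; 'passwd' is the placeholder password from cdh.sql and should be replaced in a real deployment):

[root@lion ~]# mysql -uroot -p -e 'SHOW DATABASES;'
[root@lion ~]# mysql -uscm -ppasswd -e 'SHOW GRANTS;'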

# Initializing the database schema differs from CDH 6
[root@lion ~]# cp /usr/share/java/mysql-connector-java.jar /usr/share/cmf/lib/
[root@lion ~]# sh /usr/share/cmf/schema/scm_prepare_database.sh mysql scm scm
Enter SCM password:
JAVA_HOME=/usr/java/jdk1.8.0_202-amd64
Verifying that we can write to /etc/cloudera-scm-server
Creating SCM configuration file in /etc/cloudera-scm-server
Executing:  /usr/java/jdk1.8.0_202-amd64/bin/java -cp /usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:/usr/share/java/postgresql-connector-java.jar:/usr/share/cmf/schema/../lib/* com.cloudera.enterprise.dbutil.DbCommandExecutor /etc/cloudera-scm-server/db.properties com.cloudera.cmf.db.
[                          main] DbCommandExecutor              INFO  Successfully connected to database.
All done, your SCM database is configured correctly!

# Generate the parcel checksum, then start the service
[root@lion parcel-repo]# cd /opt/cloudera/parcel-repo/
[root@lion parcel-repo]# sha1sum CDH-5.16.2-1.cdh5.16.2.p0.8-el7.parcel | awk '{ print $1 }' > CDH-5.16.2-1.cdh5.16.2.p0.8-el7.parcel.sha
[root@lion ~]# chkconfig cloudera-scm-server on
[root@lion ~]# systemctl start cloudera-scm-server

# Check the result
[root@lion ~]# netstat -tlnp |grep 7180
tcp        0      0 0.0.0.0:7180            0.0.0.0:*               LISTEN      3907/java 
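
The web console should answer on port 7180 shortly afterwards; the first startup can take a few minutes while the schema initializes (a sketch for checking from the shell):

[root@lion ~]# curl -s -o /dev/null -w '%{http_code}\n' http://lion:7180   # expect 200 once up
[root@lion ~]# tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log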

GUI Installation

The rest of the installation is relatively straightforward, and on the whole simpler than with CDH 6.

(Screenshots of the installation wizard, from start to finish.)