2016-01-26 12:26:49
Source: 中存储网 (ChinaStor) · HBase

Summary: an HBase installation and simple-test walkthrough shared for beginners: modify the HDFS settings, raise the nofile and nproc limits (HBase opens a large number of files), and add the configuration described below.

HBase installation and a simple test walkthrough, shared for beginners

1. Modify the HDFS settings

vi conf/hdfs-site.xml

Add the following setting; HBase needs to access a large number of files (note: `dfs.datanode.max.xcievers` is the property's actual, historically misspelled name):

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
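The setting above can be sanity-checked from the shell. A minimal sketch; the sample file written here is for illustration only, so on a real cluster point the grep at your actual conf/hdfs-site.xml:

```shell
# Write a minimal sample hdfs-site.xml for illustration
# (on a real cluster, grep your actual conf/hdfs-site.xml instead).
cat > /tmp/hdfs-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
</configuration>
EOF

# Pull out the <value> that follows the property name.
val=$(grep -A1 'dfs.datanode.max.xcievers' /tmp/hdfs-site-sample.xml \
      | grep -o '<value>[0-9]*</value>' | tr -d 'a-z<>/')
echo "xcievers=$val"
```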

2. Set up NTP synchronization

rpm -qa |grep ntp

The master uses the default configuration.

On the slaves:

vi /etc/ntp.conf

server 192.168.110.127

Remove all of the default server entries and replace them with the master's address.

chkconfig ntpd on

service ntpd restart

Also: it is best to use the same time zone on every node.

ln -sf /usr/share/zoneinfo/posix/Asia/Shanghai /etc/localtime 
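After ntpd has been running for a while, you can check the clock offset to the master. A rough sketch, assuming the usual `ntpq -p` column layout; the peer line is simulated here so the snippet runs anywhere, so on a real slave feed it actual `ntpq -p` output:

```shell
# Simulated `ntpq -p` peer line for illustration; column 9 is the
# offset to the server in milliseconds.
ntpq_out='*192.168.110.127 .LOCL. 1 u 33 64 377 0.214 1.325 0.088'

offset=$(echo "$ntpq_out" | awk '{ print $9 }' | tr -d '-')
# Complain if the clock has drifted more than 100 ms from the master.
awk -v o="$offset" 'BEGIN { exit (o < 100 ? 0 : 1) }' \
  && echo "clock in sync" || echo "clock drifting: ${offset} ms"
```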

3. Modify the nofile and nproc limits

HBase uses many files (every flush writes a new file), so the default of 1024 is far from enough.

vi /etc/security/limits.conf

hadoop - nofile 32768

hadoop - nproc 32768

Log in again as hadoop and verify:

ulimit -a
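A quick guard for scripts, so a forgotten limits.conf edit fails loudly rather than surfacing later as "too many open files". A sketch; note `ulimit -n` reports the limit of the current shell only:

```shell
# Check the open-files limit of the current shell session.
nofile=$(ulimit -n)
if [ "$nofile" != "unlimited" ] && [ "$nofile" -lt 32768 ]; then
  echo "WARNING: nofile=$nofile is too low for HBase"
else
  echo "nofile=$nofile is sufficient"
fi
```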

4. Download and install

Download the latest stable release from http://hbase.apache.org

tar zxf hbase-0.92.1.tar.gz

5. Set environment variables

export HBASE_HOME=$HOME/hbase-0.92.1

export HBASE_CONF_DIR=$HOME/hbase-conf

Also add them to PATH and CLASSPATH.
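Concretely, the additions to ~/.bash_profile might look like this (a sketch following this tutorial's layout, not verbatim from the original):

```shell
# Environment for HBase 0.92.1 as laid out in this walkthrough.
export HBASE_HOME=$HOME/hbase-0.92.1
export HBASE_CONF_DIR=$HOME/hbase-conf
export PATH=$PATH:$HBASE_HOME/bin
export CLASSPATH=$CLASSPATH:$HBASE_CONF_DIR

# Sanity check: the hbase launcher directory is now on PATH.
case ":$PATH:" in
  *":$HBASE_HOME/bin:"*) echo "PATH updated" ;;
  *) echo "PATH missing $HBASE_HOME/bin" ;;
esac
```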

6. Configuration

cp -r $HBASE_HOME/conf $HOME/hbase-conf

vi hbase-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.6.0-openjdk-1.6.0.0.x86_64/jre

export HBASE_HEAPSIZE=300

export HBASE_OPTS="-XX:+UseConcMarkSweepGC"

export HBASE_LOG_DIR=${HBASE_HOME}/logs

export HBASE_MANAGES_ZK=true

vi hdfs-site.xml

Add the following configuration to enable the durable sync feature, available in Hadoop 0.20.205 and later.

Enabling it is critically important; without it HBase will lose data. (My guess: the HLog needs it, since the redo log must be appended to at any time, while HDFS normally can only create new files.)

<configuration>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>

The Hadoop 0.20.205 release notes include this line:

* This release includes a merge of append/hsynch/hflush capabilities from 0.20-append branch, to support HBase in secure mode.

7. Configure fully-distributed mode

vi hbase-site.xml

Set hbase.rootdir and hbase.cluster.distributed=true (these belong in hbase-site.xml, not hdfs-site.xml):

<property>
  <name>hbase.rootdir</name>
  <value>hdfs://master:9000/hbase</value>
</property>

<property>
  <name>hbase.cluster.distributed</name>
  <value>true</value>
</property>

8. Configure the RegionServers

cat regionservers

slave1

slave2

9. Configure ZooKeeper

vi hbase-env.sh

export HBASE_MANAGES_ZK=true

vi hbase-site.xml

<property>
  <name>hbase.zookeeper.property.clientPort</name>
  <value>2222</value>
</property>

<property>
  <name>hbase.zookeeper.quorum</name>
  <value>slave1,slave2</value>
</property>

<property>
  <name>hbase.zookeeper.property.dataDir</name>
  <value>/home/hadoop/zookeeper</value>
</property>

10. Copy the installation and configuration to the other nodes

scp -r conf slave1:

scp -r conf slave2:

scp -r hbase-conf slave1:

scp -r hbase-conf slave2:

scp -r hbase-0.92.1 slave1:

scp -r hbase-0.92.1 slave2:

scp -r .bash_profile slave1:

scp -r .bash_profile slave2:
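The eight scp invocations above can be collapsed into a loop. A sketch that echoes the commands first as a dry run (remove the `echo` to actually copy):

```shell
slaves="slave1 slave2"
items="conf hbase-conf hbase-0.92.1 .bash_profile"

for host in $slaves; do
  for item in $items; do
    # Dry run: print the command instead of executing it.
    echo scp -r "$item" "$host":
  done
done
```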

11. Log in again and restart Hadoop

stop-all.sh

start-all.sh

jps

12. Start HBase

start-hbase.sh

Verify:

Use the jps command to list the Java processes.

On the master:

11420 HMaster

On the ZooKeeper nodes:

575 HQuorumPeer

On the RegionServers:

686 HRegionServer
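The jps check can be scripted so each node verifies its own role. A sketch; the jps output is simulated here so the snippet runs anywhere, so on a real node substitute `jps_out=$(jps)`:

```shell
# Simulated `jps` output for a master node; use jps_out=$(jps) for real.
jps_out='11420 HMaster
11583 Jps'

for daemon in HMaster; do
  if echo "$jps_out" | grep -q " $daemon$"; then
    echo "$daemon is running"
  else
    echo "$daemon is MISSING"
  fi
done
```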

13. A simple test

hbase shell

hbase(main):006:0> create 'test','data'

0 row(s) in 1.1190 seconds

hbase(main):007:0> list

TABLE

test

1 row(s) in 0.0270 seconds

hbase(main):009:0> put 'test','1','data:1','xxxx'

0 row(s) in 0.1220 seconds

hbase(main):010:0> put 'test','1','data:1','xxxx'

0 row(s) in 0.0120 seconds

hbase(main):011:0> put 'test','1','data:1','xxxx'

0 row(s) in 0.0120 seconds

hbase(main):015:0* put 'test','2','data:2','yyy'

0 row(s) in 0.0080 seconds

hbase(main):016:0> put 'test','3','data:3','zzz'

0 row(s) in 0.0070 seconds

hbase(main):017:0>

hbase(main):018:0*

hbase(main):019:0* scan 'test'

ROW COLUMN+CELL

1 column=data:1, timestamp=1333160616029, value=xxxx

2 column=data:2, timestamp=1333160650780, value=yyy

3 column=data:3, timestamp=1333160664490, value=zzz

3 row(s) in 0.0260 seconds

hbase(main):020:0>
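The same session can be driven non-interactively by putting the commands in a file and passing it to `hbase shell`, which is handy for repeatable smoke tests. A sketch; only the script file is built here, since running it needs a live cluster:

```shell
# Build a command file for `hbase shell` (table/family names as above).
cat > /tmp/hbase-smoke.txt <<'EOF'
put 'test','4','data:4','www'
scan 'test'
list
EOF

# On a node with HBase on PATH, run:
#   hbase shell /tmp/hbase-smoke.txt
echo "commands queued: $(wc -l < /tmp/hbase-smoke.txt)"
```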

14. Inspect the files created on HDFS

./hadoop dfs -lsr /hbase

Warning: $HADOOP_HOME is deprecated.

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-

-rw-r--r-- 2 hadoop supergroup 551 2012-03-31 10:07 /hbase/-ROOT-/.tableinfo.0000000001

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/.tmp

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/70236052

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/70236052/.oldlogs

-rw-r--r-- 2 hadoop supergroup 411 2012-03-31 10:07 /hbase/-ROOT-/70236052/.oldlogs/hlog.1333159627476

-rw-r--r-- 2 hadoop supergroup 109 2012-03-31 10:07 /hbase/-ROOT-/70236052/.regioninfo

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/-ROOT-/70236052/info

-rw-r--r-- 2 hadoop supergroup 714 2012-03-31 10:07 /hbase/-ROOT-/70236052/info/bd225e173164476f88111f622f5a7839

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META.

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META./1028785192

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META./1028785192/.oldlogs

-rw-r--r-- 2 hadoop supergroup 124 2012-03-31 10:07 /hbase/.META./1028785192/.oldlogs/hlog.1333159627741

-rw-r--r-- 2 hadoop supergroup 111 2012-03-31 10:07 /hbase/.META./1028785192/.regioninfo

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.META./1028785192/info

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave1,60020,1333159627316

-rw-r--r-- 3 hadoop supergroup 293 2012-03-31 10:07 /hbase/.logs/slave1,60020,1333159627316/slave1%2C60020%2C1333159627316.1333159637444

-rw-r--r-- 3 hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave1,60020,1333159627316/slave1%2C60020%2C1333159627316.1333159637904

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave2,60020,1333159627438

-rw-r--r-- 3 hadoop supergroup 0 2012-03-31 10:07 /hbase/.logs/slave2,60020,1333159627438/slave2%2C60020%2C1333159627438.1333159638583

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:18 /hbase/.oldlogs

-rw-r--r-- 2 hadoop supergroup 38 2012-03-31 10:07 /hbase/hbase.id

-rw-r--r-- 2 hadoop supergroup 3 2012-03-31 10:07 /hbase/hbase.version

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test

-rw-r--r-- 2 hadoop supergroup 513 2012-03-31 10:22 /hbase/test/.tableinfo.0000000001

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/.tmp

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/.oldlogs

-rw-r--r-- 2 hadoop supergroup 124 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/.oldlogs/hlog.1333160541983

-rw-r--r-- 2 hadoop supergroup 219 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/.regioninfo

drwxr-xr-x - hadoop supergroup 0 2012-03-31 10:22 /hbase/test/929f7e1caca5825974e0e991543fe2c5/data

Error

==========================================

slave1: java.io.IOException: Could not find my address: db1 in list of ZooKeeper quorum servers

slave1: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:133)

slave1: at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:60)

Reason: the machine's hostname was db1, but the name I configured was slave1 (the same IP). HBase uses the hostname reported by the OS, resolved via reverse DNS, to match itself against the ZooKeeper quorum list.

Solution:

Change the hostname to slave1 and restart the server.
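For the record, the fix might look like this on a RHEL/CentOS box of that era (the commented paths and commands are assumptions, not from the original; only the check at the end actually runs here):

```shell
# On the misnamed node (commands shown, not executed here):
#   hostname slave1
#   sed -i 's/^HOSTNAME=.*/HOSTNAME=slave1/' /etc/sysconfig/network
# then restart the HBase daemons on that node.

# The invariant HBase relies on: the OS hostname must match one of the
# names listed in hbase.zookeeper.quorum.
quorum="slave1,slave2"
node="slave1"   # on a real node: node=$(hostname)
case ",$quorum," in
  *",$node,"*) echo "hostname matches quorum entry" ;;
  *)           echo "hostname $node not in quorum: $quorum" ;;
esac
```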
