How does HBase perform inserts, updates, deletes, and queries? (In a relational database this is done with Statement objects and SQL.) Is it through classes and methods in a client API that operate on HMaster, similar to how PreparedStatement (provided by the JDBC API, I believe) compiles and executes SQL? Is there anything like MySQL's facility for creating tables, i.e. during the detailed database design phase, what backend mechanism do you use to create tables, indexes, and so on (in MySQL and other relational databases this is done with SQL on the backend)? When you query a single record from the backend, what tool do you use (for relational databases it is a third-party tool or a backend SQL console), and in what form is the result displayed (relational databases show a table)? Please explain how to actually build a distributed database environment with HBase, not how to install HBase, but the equivalent of creating a concrete database in MySQL, complete with tables and so on: a full, working database. (I am still learning, so my questions are numerous and vague.)
1 Answer

手掌心
Contributed 1942 answers · received 3+ upvotes
Here is the code for a class; read through it and you will see how the connection is made.

    import java.io.IOException;
    import java.util.Map;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.HColumnDescriptor;
    import org.apache.hadoop.hbase.HTableDescriptor;
    import org.apache.hadoop.hbase.client.HBaseAdmin;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;

    public class Htable {
        public static void main(String[] args) throws IOException {
            Configuration HBASE_CONFIG = new Configuration();
            // Must match hbase.master in hbase/conf/hbase-site.xml
            HBASE_CONFIG.set("hbase.master", "9.186.89.27:60000");
            // Must match hbase.zookeeper.quorum in hbase/conf/hbase-site.xml
            HBASE_CONFIG.set("hbase.zookeeper.quorum",
                    "9.186.89.27,9.186.89.29,9.186.89.31,9.186.89.33,9.186.89.34");
            // Must match hbase.zookeeper.property.clientPort in hbase/conf/hbase-site.xml
            HBASE_CONFIG.set("hbase.zookeeper.property.clientPort", "2181");
            Configuration hbaseConf = HBaseConfiguration.create(HBASE_CONFIG);

            HBaseAdmin admin = new HBaseAdmin(hbaseConf);

            // Describe the table and its column family
            HTableDescriptor htableDescriptor = new HTableDescriptor("test11".getBytes());
            htableDescriptor.addFamily(new HColumnDescriptor("cf1"));

            // Drop the table if it already exists
            if (admin.tableExists(htableDescriptor.getName())) {
                admin.disableTable(htableDescriptor.getName());
                admin.deleteTable(htableDescriptor.getName());
            }

            // Create the table
            admin.createTable(htableDescriptor);

            // Get a handle to the table
            HTable table = new HTable(hbaseConf, "test11");

            // Insert three rows
            for (int i = 0; i < 3; i++) {
                // The i-th row
                Put putRow = new Put(("row" + i).getBytes());
                // add(family, qualifier, value)
                putRow.add("cf1".getBytes(), (i + "col1").getBytes(), (i + "value1").getBytes());
                putRow.add("cf1".getBytes(), (i + "col2").getBytes(), (i + "value2").getBytes());
                putRow.add("cf1".getBytes(), (i + "col3").getBytes(), (i + "value3").getBytes());
                table.put(putRow);
            }

            // Scan the column family and print every column/value pair
            for (Result result : table.getScanner("cf1".getBytes())) {
                for (Map.Entry<byte[], byte[]> entry
                        : result.getFamilyMap("cf1".getBytes()).entrySet()) {
                    String column = new String(entry.getKey());
                    String value = new String(entry.getValue());
                    System.out.println(column + "," + value);
                }
            }

            table.close();
        }
    }
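On the "what tool plays the role of MySQL's SQL console" part of the question: HBase ships an interactive command-line tool, the HBase shell, which is the usual way to create tables, insert test data, and look up single records by hand. A minimal session might look like the sketch below; the table and family names `test11`/`cf1` are chosen to match the Java code above, and note that results are printed as one line per cell (row key, column, timestamp, value) rather than as a relational-style grid:

```shell
# Start the interactive shell (assumes $HBASE_HOME/bin is on your PATH)
hbase shell

# Inside the shell: create a table with one column family
create 'test11', 'cf1'

# Insert a cell: put <table>, <row key>, '<family:qualifier>', <value>
put 'test11', 'row0', 'cf1:col1', 'value1'

# Fetch a single record by row key (the "query one record" case)
get 'test11', 'row0'

# Scan the whole table; output is one line per cell, not a grid
scan 'test11'
```

These shell commands require a running HBase cluster, so treat this as a usage sketch rather than something to paste blindly; the command names (`create`, `put`, `get`, `scan`) are the standard HBase shell DDL/DML commands.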