After Enabling Kerberos: Authentication for Flink Jobs and the Other Components

Kerberos Installation and Configuration

https://www.psvmc.cn/article/2022-11-08-bigdata-kerberos-centos.html

Showing Kerberos Debug Logs in Java

-Dsun.security.krb5.debug=true -Djava.security.auth.login.config=/data/hadoop/etc/hadoop/hdfs.conf -Djava.security.krb5.conf=/etc/krb5.conf

Testing on the Server

Flink Job Authentication

Flink on YARN

flink run \
-yD security.kerberos.login.keytab=/data/tools/bigdata/kerberos/hdfs.keytab \
-yD security.kerberos.login.principal=hdfs/hadoop01@HADOOP.COM \
/root/yxzt-data-tcs-1.0-SNAPSHOT-jar-with-dependencies.jar -job /root/zjhome/task_service.json

How the Authentication Works

  1. When the Flink program starts, it automatically uploads the keytab file to HDFS; YARN manages it and distributes it to every executor, which caches a token and refreshes it periodically.
  2. Because of this, a custom RichSinkFunction that uses a Kerberos-secured component does not need to perform any authentication of its own.
  3. For example, Hive, HBase, Kudu and so on can be accessed simply by opening a connection (see the sketch after this list).
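
As an illustration, here is a minimal sketch of such a sink: a RichSinkFunction that writes to the Kerberos-secured Hive instance over JDBC with no login code of its own, because the keytab passed via security.kerberos.login.keytab / security.kerberos.login.principal already covers it. The JDBC URL is the one used elsewhere in this article; the table t_demo and its single column are made up for the example.

import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch only: relies entirely on Flink's Kerberos integration
// (security.kerberos.login.keytab / security.kerberos.login.principal),
// so there is no UserGroupInformation code inside the sink.
public class HiveKerberosSink extends RichSinkFunction<String> {
    private transient Connection conn;
    private transient PreparedStatement ps;

    @Override
    public void open(Configuration parameters) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // Same kerberized URL used elsewhere in this article.
        conn = DriverManager.getConnection(
                "jdbc:hive2://hadoop01:10000/zdb;principal=hdfs/hadoop01@HADOOP.COM");
        // t_demo is a hypothetical single-column table; replace it with your own.
        ps = conn.prepareStatement("INSERT INTO t_demo VALUES (?)");
    }

    @Override
    public void invoke(String value, Context context) throws Exception {
        ps.setString(1, value);
        ps.execute();
    }

    @Override
    public void close() throws Exception {
        if (ps != null) ps.close();
        if (conn != null) conn.close();
    }
}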

User Authentication on the Server

Authenticate

kinit -kt /data/tools/bigdata/kerberos/hdfs.keytab hdfs/hadoop01@HADOOP.COM

Check the authentication status

klist

Hive Connections with Authentication

Testing on the server

hive

Using JDBC

Before (without Kerberos)

beeline -n hive -u jdbc:hive2://hadoop01:10000/default

Note that the double quotes are required; without them the settings after the semicolon are ignored.

With the principal set in the configuration

beeline -n hive -u "jdbc:hive2://hadoop01:10000/zdb;principal=hdfs/hadoop01@HADOOP.COM"

In my tests, only the user set in the configuration files was able to connect.

Query

show databases;

HBase Connection

hbase shell

Phoenix Shell Connection

After starting ZooKeeper => HDFS => YARN => HBase:

sqlline.py hadoop01,hadoop02,hadoop03:2181

The first startup is fairly slow; please be patient.

Query

!tables
!quit

Hive JDBC Authentication

Two files are needed:

  • The configuration file krb5.conf

  • The keytab file krb5.keytab, usually generated on the server and copied from there

Put them in the resources directory.

Kerberos Authentication

Kerberos configuration file: krb5.conf; replace it with your own.

Keytab file: krb5.keytab; replace it with your own.

Principal: hive; change it to match your environment.

Here the configuration file and the keytab are copied from the classpath to a temporary directory before authenticating; you can point to a fixed directory instead if you prefer.

Using the Configuration Bundled with the Project

Authentication helper: KerberosAuth.java

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.*;
import java.nio.file.Files;
import java.nio.file.Paths;

public class KerberosAuth {
private static final Logger log = LoggerFactory.getLogger(KerberosAuth.class);
// Kerberos configuration file, obtained from the server
private static final String krbConfig = "krb5.conf";
// Kerberos keytab file
private static final String krbKeytab = "psvmc.keytab";
// Kerberos principal (authentication user)
private static final String principal = "psvmc/hadoop@HADOOP.COM";

public static void init() {
initkerberos();
}

public static void initkerberos() {
log.info("Kerberos 登陆验证");
try {
// java临时目录,window为C:\Users\登录用户\AppData\Local\Temp\,linux为/tmp,需要根据情况添加斜杠
String javaTempDir = System.getProperty("java.io.tmpdir");
String tempDir = Paths.get(javaTempDir, "krb_" + System.currentTimeMillis()).toString();
String configPath = getTempPath(tempDir, krbConfig);
String keytabPath = getTempPath(tempDir, krbKeytab);
log.error(configPath);
log.error(keytabPath);
System.setProperty("java.security.krb5.conf", configPath);//path to krb5.conf; must be set before creating the Configuration, otherwise it does not take effect
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "Kerberos");//set the authentication mode to Kerberos
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);//principal and keytab file used for the login
log.error("Kerberos authentication succeeded");
} catch (Exception e) {
log.error("Kerberos 验证失败", e);
}
}

/**
* Copy a classpath resource to disk and return its path
* (works around the fact that files under resources cannot be read as plain files from inside a jar)
*
* @param tempPath temporary directory
* @param fileName file name
* @return temporary path of the copied file
*/
@SuppressWarnings("ResultOfMethodCallIgnored")
public static String getTempPath(String tempPath, String fileName) {
InputStream in = KerberosAuth.class.getResourceAsStream("/" + fileName);
String pathAll = tempPath + File.separator + fileName;
File file = new File(pathAll);
File tempPathFile = new File(tempPath);
if (!tempPathFile.exists()) {
tempPathFile.mkdirs();
}
try {
copyInputStreamToFile(in, pathAll);
} catch (Exception e) {
log.error("getTempPath", e);
}
return file.getPath();
}

private static void copyInputStreamToFile(InputStream is, String strFileFullPath) throws IOException {
long size = 0;
BufferedInputStream in = new BufferedInputStream(is);
BufferedOutputStream out = new BufferedOutputStream(Files.newOutputStream(Paths.get(strFileFullPath)));
int len = -1;
byte[] b = new byte[1024];
while ((len = in.read(b)) != -1) {
out.write(b, 0, len);
size += len;
}
in.close();
out.close();
//adjust the file access permissions
changeFolderPermission(strFileFullPath);
}

private static void changeFolderPermission(String dirPath) {
File dirFile = new File(dirPath);
dirFile.setReadable(true, false);
dirFile.setExecutable(true, false);
dirFile.setWritable(true, false);
}

public static void main(String[] args) {
KerberosAuth.init();
}
}

Using the Configuration on the Server

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.sql.*;

public class KerberosAuthServer {
private static final Logger log = LoggerFactory.getLogger(KerberosAuthServer.class);

public static boolean initkerberos(String principal, String keytabPath) {
log.info("Kerberos 登陆验证");
try {
log.error(principal);
log.error(keytabPath);
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "Kerberos");//设置认证模式Kerberos
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);//设置认证用户和krb认证文件路径
log.error("Kerberos 验证成功");
return true;
} catch (Exception e) {
log.error("Kerberos 验证失败", e);
return false;
}
}


private static void connectHive() throws SQLException, ClassNotFoundException {
Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection connection = DriverManager
.getConnection("jdbc:hive2://hadoop01:10000/zdb;principal=hdfs/hadoop01@HADOOP.COM");
PreparedStatement ps = connection.prepareStatement("show databases");
ResultSet rs = ps.executeQuery();
while (rs.next()) {
System.out.println(rs.getString(1));
}
rs.close();
ps.close();
connection.close();
}


public static void main(String[] args) throws SQLException, ClassNotFoundException {
boolean isAuth = KerberosAuthServer.initkerberos("hdfs/hadoop01@HADOOP.COM", "/data/tools/bigdata/kerberos/hdfs.keytab");
if (isAuth) {
connectHive();
}
}
}

JDBC Utility Class

Once Kerberos is enabled for Hive, JDBC connections have to authenticate with Kerberos as well.

The JDBC URL must then carry the authentication-related settings.

For example:

1
jdbc:hive2://192.168.7.101:10000/zdb;principal=psvmc/hadoop@HADOOP.COM

Where the principal breaks down as user/hostname@REALM:

  • psvmc: the user name

  • hadoop: the hostname, which can also be thought of as a group

  • HADOOP.COM: the realm, which must match the one in krb5.conf

Utility class

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.sql.*;
import java.util.*;

public class HiveUtils {
private static Logger logger = LoggerFactory.getLogger(HiveUtils.class.getName());

private static String driverName = "org.apache.hive.jdbc.HiveDriver";
private static String url = "jdbc:hive2://192.168.7.101:10000/zdb;principal=psvmc/hadoop@HADOOP.COM";//the default port is 10000

/**
* Get a Connection
* @return conn
* @throws SQLException
*/

public static Connection getConnection() throws SQLException {
Connection conn = null;
try {
KerberosAuth.init();
conn = DriverManager.getConnection(url);
} catch (SQLException e) {
logger.info("获取数据库连接失败!");
throw e;
}
return conn;
}

// Create a database
public static void createDatabase(String databaseName) throws Exception {
String sql = "create database "+databaseName;
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
stmt.execute(sql);
closeConnection(conn);
closeStatement(stmt);
}

// List all databases
public static void showDatabases() throws Exception {
String sql = "show databases";
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
logger.info(rs.getString(1));
}
closeConnection(rs,stmt,conn);
}

/**
* Create a table (with "," as the field delimiter)
* e.g. create table tableName(name string,sex string) row format delimited fields terminated by ','
* @param sql
* @throws Exception
*/
public static void createTable(String sql) throws Exception {
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
stmt.execute(sql);
closeConnection(conn);
closeStatement(stmt);
}

// List all tables
public static void showTables() throws Exception {
String sql = "show tables";
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
logger.info(rs.getString(1));
}
closeConnection(rs,stmt,conn);
}

// Describe the table structure
public static void descTable(String tableName) throws Exception {
String sql = "desc formatted "+tableName;
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
logger.info(rs.getString(1) + "\t" + rs.getString(2));
}
closeConnection(rs,stmt,conn);
}

// Load data (make sure the file permissions allow it)
public static void loadData(String filePath,String tableName) throws Exception {
String sql = "load data inpath '" + filePath + "' into table " + tableName;
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
stmt.execute(sql);
closeConnection(conn);
closeStatement(stmt);
}

// Query data
public static void selectData(String sql) throws Exception {
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(sql);
while (rs.next()) {
logger.info(rs.getString(1));
}
closeConnection(rs,stmt,conn);
}

// Drop a database
public static void dropDatabase(String databaseName) throws Exception {
String sql = "drop database if exists "+databaseName;
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
stmt.execute(sql);
closeConnection(conn);
closeStatement(stmt);
}

// Drop a table
public static void dropTable(String tableName) throws Exception {
String sql = "drop table if exists "+tableName;
logger.info("Running: " + sql);
Connection conn = getConnection();
Statement stmt = conn.createStatement();
stmt.execute(sql);
closeConnection(conn);
closeStatement(stmt);
}


public static Map<String,Object> queryMapBySql(String sql){
//database connection
Connection conn = null;
//PreparedStatement object
PreparedStatement ps = null;
//query result set
ResultSet rs = null;
try {
conn = getConnection();
//prepare the SQL statement to execute
ps = conn.prepareStatement(sql);
rs = ps.executeQuery();
return getMapFromResultSet(rs);
} catch (Exception e) {
logger.info("queryMapBySql " + e.getMessage());
}finally {
closeConnection(rs,ps,conn);
}
return Collections.emptyMap();
}

/**
* Close the ResultSet, Statement and Connection
*
* @param rs
* @param stmt
* @param con
*/

public static void closeConnection(ResultSet rs, Statement stmt, Connection con) {
closeResultSet(rs);
closeStatement(stmt);
closeConnection(con);
}

/**
* Close the ResultSet
*
* @param rs
*/

public static void closeResultSet(ResultSet rs) {
if (rs != null) {
try {
rs.close();
} catch (SQLException e) {
logger.info(e.getMessage());
}
}
}

/**
* Close the Statement
*
* @param stmt
*/

public static void closeStatement(Statement stmt) {
if (stmt != null) {
try {
stmt.close();
} catch (Exception e) {
logger.info(e.getMessage());
}
}
}

/**
* Close the Connection
*
* @param con
*/

public static void closeConnection(Connection con) {
if (con != null) {
try {
con.close();
} catch (Exception e) {
logger.info(e.getMessage());
}
}
}

/**
* Convert a ResultSet into a Map
* @param rs ResultSet
* @return Map
* @throws SQLException on error
*/
public static Map<String,Object> getMapFromResultSet(ResultSet rs)
throws SQLException {
Map<String,Object> hm = new HashMap<>();
ResultSetMetaData rsmd = rs.getMetaData();
int count = rsmd.getColumnCount();// number of columns
while(rs.next()) {
for (int i = 1; i <= count; i++) {
String key = rsmd.getColumnLabel(i);
Object value = rs.getObject(i);
hm.put(key, value);
}
}
return hm;
}

public static List<Map<String,Object>> queryListBySql(String sql){
//database connection
Connection conn = null;
//PreparedStatement object
PreparedStatement ps = null;
//query result set
ResultSet rs = null;
try {
conn = getConnection();
//prepare the SQL statement to execute
ps = conn.prepareStatement(sql);
rs = ps.executeQuery();
return getListFromResultSet(rs);
} catch (Exception e) {
logger.info("queryListBySql " + e.getMessage());
}finally {
closeConnection(rs,ps,conn);
}
return Collections.emptyList();
}

/**
* Convert a ResultSet into a list of row maps
* @param rs ResultSet
* @return List
* @throws SQLException on error
*/
private static List<Map<String,Object>> getListFromResultSet(ResultSet rs)
throws SQLException {
List<Map<String,Object>> results= new ArrayList<>();//result rows
ResultSetMetaData metaData = rs.getMetaData(); // column metadata
List<String> colNameList= new ArrayList<>();
int cols_len = metaData.getColumnCount(); // total number of columns
for (int i = 0; i < cols_len; i++) {
colNameList.add(metaData.getColumnName(i+1));
}
while (rs.next()) {
Map<String, Object> map= new HashMap<>();
for(int i=0;i<cols_len;i++){
String key=colNameList.get(i);
Object value=rs.getString(colNameList.get(i));
map.put(key, value);
}
results.add(map);
}
return results;
}

public static void main(String[] args) throws Exception {
String sql = "SELECT * FROM `t1` LIMIT 1";
List<Map<String, Object>> maps = queryListBySql(sql);
logger.info(maps.toString());
}
}

Utility Classes

Exception class

public class ZRuntimeException extends RuntimeException {
public ZRuntimeException(String format, Object... objs) {
super(String.format(format, objs));
}
}

Configuration loader class

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class ZLoadConfig {
public static String getHadoopConfigRootPath() {
String conf = System.getenv("HADOOP_CONF_DIR");
if (conf == null) {
String hh = System.getenv("HADOOP_HOME");
if (hh == null) {
throw new ZRuntimeException("Hadoop configuration directory not found");
}
conf = hh + "/etc/hadoop";
}
return conf;
}

public static String getZKConfigRootPath() {
String conf = System.getenv("ZK_HOME");
if (conf != null) {
conf += "/conf";
}
return conf;
}

public static String getHbaseConfigRootPath() {
String conf = System.getenv("HBASE_HOME");
if (conf != null) {
conf += "/conf";
}
return conf;
}

public static Configuration loadHDFS() throws ZRuntimeException {
String conf = getHadoopConfigRootPath();
Configuration config = new Configuration();
config.addResource(new Path(conf + "/core-site.xml"));
config.addResource(new Path(conf + "/hdfs-site.xml"));
return config;
}

public static YarnConfiguration loadYarn() throws ZRuntimeException {
String conf = getHadoopConfigRootPath();
YarnConfiguration config = new YarnConfiguration();
config.addResource(new Path(conf + "/core-site.xml"));
config.addResource(new Path(conf + "/hdfs-site.xml"));
config.addResource(new Path(conf + "/yarn-site.xml"));
return config;
}

public static Configuration loadHbase() {
String hadoopConfPath = getHadoopConfigRootPath();
String hbaseConfPath = getHbaseConfigRootPath();
Configuration config = HBaseConfiguration.create();
config.addResource(new Path(hadoopConfPath + "/core-site.xml"));
config.addResource(new Path(hadoopConfPath + "/hdfs-site.xml"));
config.addResource(new Path(hadoopConfPath + "/yarn-site.xml"));
config.addResource(new Path(hbaseConfPath + "/hbase-site.xml"));
return config;
}
}

Test class

import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;
import java.util.Properties;

import javax.security.auth.login.LoginContext;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.NamespaceDescriptor;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ApplicationReport;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.hadoop.yarn.exceptions.YarnException;
import org.apache.phoenix.jdbc.PhoenixConnection;
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;


public class ZKerberosUtil {


public static boolean initkerberos(String principal, String keytabPath) {
try {
Configuration conf = new Configuration();
conf.set("hadoop.security.authentication", "Kerberos");//设置认证模式Kerberos
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);//设置认证用户和krb认证文件路径
System.out.println("Kerberos 验证成功");
return true;
} catch (Exception e) {
System.out.println("Kerberos 验证失败" + e.getMessage());
return false;
}
}

public static void testZK() {
try {
// point JAAS at the authentication configuration
System.setProperty("java.security.auth.login.config", ZLoadConfig.getZKConfigRootPath() + "/jaas.conf");
// log in through the module configured in the JAAS file
LoginContext context = new LoginContext("Client");
context.login();
// create the ZooKeeper client and its watcher
ZooKeeper zk = new ZooKeeper("hadoop01:2181", 5000, new Watcher() {
@Override
public void process(WatchedEvent event) {
System.out.println(event.getPath() + " : " + event.getState().toString());
}

});
while (zk.getState() != ZooKeeper.States.CONNECTED) {
Thread.sleep(500);
}
System.out.println("连接到zk");
List<String> ss = zk.getChildren("/", true);
ss.forEach((s) -> {
System.out.println(s);
});
zk.close();
context.logout();
} catch (Exception e) {
// handle the exception
e.printStackTrace();
}
}

public static void testHdfs(String principal, String keytabPath) throws IOException {
Configuration conf = ZLoadConfig.loadHDFS();
System.out.println(conf.get("fs.defaultFS"));
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
FileSystem fs = FileSystem.get(conf);
FileStatus[] fsStatus = fs.listStatus(new Path("/"));
for (FileStatus st : fsStatus) {
System.out.println(st.getPath());
}
}


public static void testYarn(String principal, String keytabPath) throws IOException, YarnException {
YarnConfiguration conf = ZLoadConfig.loadYarn();
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(principal, keytabPath);
YarnClient yc = YarnClient.createYarnClient();
yc.init(conf);
yc.start();

List<ApplicationReport> applications = yc.getApplications();
applications.forEach((a) -> {
System.out.println(a.getApplicationId());
});
yc.close();
}

public static void testHive(String principal, String keytabPath) throws SQLException, IOException, ClassNotFoundException {
Configuration conf = ZLoadConfig.loadHDFS();
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(
principal,
keytabPath
);
Class.forName("org.apache.hive.jdbc.HiveDriver");
Connection connection = DriverManager
.getConnection("jdbc:hive2://hadoop01:10000/yxdp_ys;principal=hdfs/hadoop01@HADOOP.COM");
PreparedStatement ps = connection.prepareStatement("show databases");
ResultSet rs = ps.executeQuery();
while (rs.next()) {
System.out.println(rs.getString(1));
}
rs.close();
ps.close();
connection.close();
}

public static void testHbase(String principal, String keytabPath) throws IOException, SQLException {
Configuration conf = ZLoadConfig.loadHbase();
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab(
principal,
keytabPath
);
org.apache.hadoop.hbase.client.Connection con = ConnectionFactory.createConnection(conf);
NamespaceDescriptor[] nds = con.getAdmin().listNamespaceDescriptors();
for (NamespaceDescriptor nd : nds) {
System.out.println(nd.getName());
}
}

public static void testPhoenix(String principal, String keytabPath) throws IOException, ClassNotFoundException, SQLException {
Configuration conf = ZLoadConfig.loadHbase();
Properties prop = new Properties();
conf.getValByRegex(".*?").forEach(prop::setProperty);
Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");

// With Kerberos, the Phoenix JDBC URL has the form jdbc:phoenix:zk:2181:/znode:principal:keytab
String url = "jdbc:phoenix:hadoop01,hadoop02,hadoop03:/hbase:" + principal + ":" + keytabPath;
PhoenixConnection con = DriverManager.getConnection(url, prop).unwrap(PhoenixConnection.class);
Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM ZDB.TUSER");
int n = rs.getMetaData().getColumnCount();
for (int i = 0; i < n; i++) {
String cn = rs.getMetaData().getColumnName(i + 1);
System.out.println(cn);
}
rs.close();
stmt.close();
con.close();
}
}

Testing Phoenix

public static void testPhoenix(String principal, String keytabPath) throws IOException, ClassNotFoundException, SQLException {
Configuration conf = ZLoadConfig.loadHbase();
Properties prop = new Properties();
conf.getValByRegex(".*?").forEach(prop::setProperty);
Class.forName("org.apache.phoenix.jdbc.PhoenixDriver");

// With Kerberos, the Phoenix JDBC URL has the form jdbc:phoenix:zk:2181:/znode:principal:keytab
String url = "jdbc:phoenix:hadoop01,hadoop02,hadoop03:/hbase:" + principal + ":" + keytabPath;
PhoenixConnection con = DriverManager.getConnection(url, prop).unwrap(PhoenixConnection.class);
Statement stmt = con.createStatement();


stmt.execute("upsert into yxdp_mx.MGT_SECOND_TRANSFER27501609(id_,code_,is_closed_,day_traffic_num_,fk_area_grid_id_,fk_area_grid_name_,fk_clean_area_id_,fk_clean_area_name_,fk_manage_person_id_,fk_transfer_station_id_,rfid_code_,photo_url_,status_,arrange_time_,operate_time_,longitude_,latitude_,second_car_type_,second_car_name_,del_flag_,create_by_,create_time_,update_by_,update_time_,yxdp_origin_task,yxdp_origin_table,yxdp_process_time,yxdp_task_instance,yxdp_id) VALUES ('1639218834574876674','北办078',1,'2','1597852749031333889','北下街街道','1818124535238850048,1844423858199840000,1601033133454870016','星月小区,管城后街1号,清真寺街58号','1639218430319468546','1638727332944564226','E28069950000500662D4667A','1649231643853352961',2,'8:00 - 22:00','8:00 - 22:00','113.678966','34.756267','1','厨余垃圾车',0,'1','2023-03-24T18:53:20','1','2023-07-26T09:33:30','1691762963500257281','1691739022249447426','2023-08-18 13:07:12','1691762963500257281_20230817205803','4938677a-a7cd-4bc5-a74b-f132d6697f08')");
con.commit();

ResultSet rs = stmt.executeQuery("SELECT * FROM yxdp_mx.MGT_SECOND_TRANSFER27501609");
int n = rs.getMetaData().getColumnCount();

while (rs.next()) {
Map<String, String> rowMap = new HashMap<>();
for (int i = 0; i < n; i++) {
String cn = rs.getString(i + 1);
rowMap.put(rs.getMetaData().getColumnName(i + 1), cn);
}
System.out.println(JSON.toJSONString(rowMap));
}
rs.close();
stmt.close();
con.close();
}

Summary

First, when setting things up, pay attention to the following environment variables:

  • ZK_HOME
  • HADOOP_HOME
  • HBASE_HOME

ZooKeeper authenticates directly through a JAAS configuration file.

jaas.conf

Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/data/tools/bigdata/kerberos/hdfs.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/hadoop01@HADOOP.COM";
};

Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/data/tools/bigdata/kerberos/hdfs.keytab"
storeKey=true
useTicketCache=false
principal="cli@HADOOP.COM";
};

HDFS, YARN, Hive and HBase all authenticate through Hadoop's UserGroupInformation.

Phoenix uses the thick (fat) client and handles the authentication itself.

Concepts

What is the relationship between a Principal and a Ticket?

Principal and Ticket are two concepts in Kerberos, and they are closely related.

  1. Principal: the entity that identifies a user or a service in Kerberos. It is usually a unique identifier such as a user name, a service name or a hostname, and it establishes the identity of that entity.

  2. Ticket: a credential in the Kerberos authentication system showing that an authenticated entity is allowed to access a given service. A ticket carries the identity of the user or service together with the service and permissions it has been granted.

A Principal is turned into a Ticket through the Kerberos authentication process. When a user authenticates successfully, it receives a Ticket containing its identity and access rights, and that Ticket is then used to prove the user's identity and authorization when accessing protected resources.

In short, a Principal identifies a user or service, while a Ticket is the credential obtained after Kerberos authentication that grants access. The Ticket is used in the communication between the user and a service to verify the user's identity and authorize access to the corresponding resources.
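
To make this concrete, here is a minimal sketch that performs a JAAS login for a principal and prints the ticket it receives. It assumes the Client entry of the jaas.conf shown earlier in the Summary section; the jaas.conf path used below is only an example and should point at your own file.

import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosTicket;
import javax.security.auth.login.LoginContext;

// Sketch only: logs in through the "Client" JAAS entry and prints the TGT
// obtained for that principal.
public class TicketInspect {
    public static void main(String[] args) throws Exception {
        System.setProperty("java.security.krb5.conf", "/etc/krb5.conf");
        // Assumed location of the jaas.conf from the Summary section; adjust as needed.
        System.setProperty("java.security.auth.login.config", "/data/tools/bigdata/kerberos/jaas.conf");

        LoginContext lc = new LoginContext("Client");
        lc.login(); // one login == one freshly issued ticket
        Subject subject = lc.getSubject();
        for (KerberosTicket t : subject.getPrivateCredentials(KerberosTicket.class)) {
            System.out.println("client   : " + t.getClient());   // the principal
            System.out.println("server   : " + t.getServer());   // krbtgt/REALM@REALM for a TGT
            System.out.println("authTime : " + t.getAuthTime()); // differs for every login
            System.out.println("endTime  : " + t.getEndTime());  // ticket lifetime
        }
        lc.logout();
    }
}

Running this twice produces two tickets with different authTime values, which is the behaviour discussed in the next question.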

Does logging in multiple times with the same Principal produce different Tickets?

Yes. Multiple logins with the same Principal produce different Tickets.

In Kerberos, a ticket is generated from a timestamp and a session identifier, and every ticket is independent.

Each time a user authenticates with Kerberos, a new ticket is issued that contains the user's identity and access rights.

Even for the same Principal, different login sessions receive independent tickets.

This improves security and helps prevent replay attacks.

In practice this means that each login yields a new Ticket, and the old one in the local credential cache is replaced and no longer used.

The newly issued Ticket stays valid for its lifetime (usually a limited period) and is used to access protected resources.

Every Ticket has its own unique identity and expiry time, after which it is no longer accepted.

So multiple logins with the same Principal produce different Tickets, each corresponding to an independent login session with its own lifetime and access rights.

Note

When a user performs a new login and obtains a new ticket, the old ticket in the credential cache is replaced and no longer used; once it passes its expiry time, the server will not accept it.

This ensures that the most recent valid ticket is the one used to authorize the user's access to resources.

Testing

Authenticate

kinit -kt /data/kerberos/kerberos.keytab hdfs/hdp01@HADOOP.COM

Check the authentication status

klist