Understanding Kafka in Depth (Part 1)

Preface

Kafka crashed again in my project recently…

My sanity is gone…

It keeps reporting "no broker", and I have never fully gotten to the bottom of it. So, enough is enough: time to understand Kafka in depth…

My previous articles each ran to tens of thousands of characters, which hurt readability, so from now on I will publish one article per chapter. That should make them easier to read…

Chapter 1: Getting to Know Kafka

Kafka was originally developed at LinkedIn in Scala as a multi-partition, multi-replica distributed messaging system coordinated by ZooKeeper, and has since been donated to the Apache Foundation. Today Kafka positions itself as a distributed streaming platform, widely used for its high throughput, persistence, horizontal scalability, and stream-processing support.

Kafka plays three main roles:

  1. Messaging system. It provides the capabilities of a traditional message broker: system decoupling, redundant storage, traffic peak shaving, asynchronous communication, and so on, while also guaranteeing message ordering (per partition) and supporting replaying past messages.
  2. Storage system. Thanks to its multi-replica mechanism, Kafka can serve as a data storage system.
  3. Stream-processing platform. It supplies a reliable data source for the popular stream-processing frameworks and also ships with a complete stream-processing library of its own.

Basic Concepts

Components

  1. Producer. Creates messages and sends them to brokers.
  2. Consumer. Receives messages from brokers.
  3. ZooKeeper. Manages cluster metadata, elects the controller, and so on.
  4. Broker. Stores received messages on disk; brokers can form a cluster.

Key Concepts

Topic and Partition

Messages in Kafka are categorized by topic: producers send messages to a specific topic, and consumers subscribe to topics and consume from them.

A topic is only a logical concept; the physical unit is the partition. A topic can have multiple partitions, and each partition can live on a different broker. At the storage layer, a partition can be viewed as an append-only log file, and the offset uniquely identifies a message within its partition. Kafka relies on offsets to guarantee ordering within a partition, so Kafka guarantees order per partition, not per topic.
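The per-partition ordering guarantee can be illustrated with a toy in-memory model. This is a hedged sketch in plain Python, not Kafka's API or implementation; the `Partition`/`Topic` classes and the key-based partitioner are illustrative assumptions:

```python
# Toy sketch (not Kafka's implementation): a topic is a set of partitions,
# each an append-only log; a message's offset is its index in that log, so
# ordering is guaranteed within a partition but not across the whole topic.

class Partition:
    def __init__(self):
        self.log = []                       # append-only; index == offset

    def append(self, msg):
        self.log.append(msg)
        return len(self.log) - 1            # offset assigned to the message

class Topic:
    def __init__(self, num_partitions):
        self.partitions = [Partition() for _ in range(num_partitions)]

    def send(self, key, msg):
        # Key-based partitioner: the same key always lands in the same
        # partition, which is how per-key ordering is achieved in practice.
        p = hash(key) % len(self.partitions)
        return p, self.partitions[p].append(msg)

topic = Topic(num_partitions=3)
offsets = [topic.send("order-42", f"event-{i}")[1] for i in range(5)]
print(offsets)  # -> [0, 1, 2, 3, 4]: strictly increasing within one partition
```

Sending with the same key keeps those messages ordered; messages sent with different keys may interleave arbitrarily at the topic level.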

Multi-Replica Mechanism

Kafka replicates each partition across multiple brokers for high availability. As with MySQL, this is a single-leader, multiple-follower design, so the core problem is leader-follower synchronization, along with the usual companions: failover and replication lag.

All replicas of a partition are collectively called the AR (Assigned Replicas), where AR = ISR (In-Sync Replicas) + OSR (Out-of-Sync Replicas). When the leader replica fails, only replicas in the ISR are eligible to be elected as the new leader.
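The eligibility rule can be sketched in a few lines. This is an illustrative simplification (not Kafka's source, and it ignores unclean leader election); the `elect_leader` helper and broker ids are assumptions:

```python
# Illustrative sketch: on leader failure, the new leader is chosen from the
# assigned replicas (AR), but only ISR members are eligible.

def elect_leader(ar, isr, failed_leader):
    """Return the first assigned replica that is in-sync and not the failed one."""
    for replica in ar:                       # AR preserves the preferred order
        if replica in isr and replica != failed_leader:
            return replica
    return None   # no in-sync replica left (unclean election not modeled here)

ar = [1, 2, 3]    # broker ids hosting this partition's replicas
isr = {1, 3}      # broker 2 has fallen behind -> moved to the OSR
print(elect_leader(ar, isr, failed_leader=1))  # -> 3: broker 2 is skipped
```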

For reading offsets, Kafka also introduces the HW (High Watermark) and the LEO (Log End Offset). MySQL's MVCC implementation likewise uses a high-watermark concept, though there it appears in transaction read views; NIO's Buffer has an analogous pair of markers as well.

Kafka's replication is neither fully synchronous nor purely asynchronous. An analogy with transaction commit helps: synchronous would mean committing only once everything finishes (i.e. a message becomes readable only after every follower replica has copied it), while asynchronous would mean committing as soon as the leader's write succeeds (this is how MySQL replicates). The former hurts throughput; the latter is unreliable, because if the leader crashes before the followers catch up, data is lost. Kafka takes the middle road: a message is considered committed once every replica in the ISR has synchronized it.
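The "committed once the ISR has it" rule connects HW and LEO directly: the high watermark is the minimum LEO across the ISR, and consumers only see offsets below the HW. A minimal sketch, simplified relative to the real protocol (the replica names and LEO values are made-up assumptions):

```python
# Simplified HW/LEO model: each replica's LEO is the offset of the next
# message it would write; the partition's HW is the smallest LEO among the
# in-sync replicas. Consumers may only read offsets strictly below the HW,
# i.e. messages the entire ISR already holds.

def high_watermark(leo_by_replica, isr):
    return min(leo_by_replica[r] for r in isr)

leo = {"leader": 10, "f1": 10, "f2": 7, "f3": 4}
isr = {"leader", "f1", "f2"}       # f3 lags too far behind -> OSR

hw = high_watermark(leo, isr)
print(hw)                          # -> 7: offsets 0..6 are committed
visible = list(range(hw))          # offsets a consumer is allowed to read
```

Note how shrinking the ISR (dropping `f2`) would raise the HW to 10, which is exactly why ISR membership and durability trade off against each other.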

Installation and Configuration

I am on macOS, so what follows is a detailed walkthrough of installing and running Kafka on macOS.

Installing and Configuring the JDK

Kafka runs on the JVM, so we need a JVM first. This part is easy…

https://www.jianshu.com/p/194531d106ae

  1. Download
  2. Set up the environment
  3. Verify the configuration

I won't go into detail here; just follow the link above. Everything below, I will walk through in full.

Installing and Configuring ZooKeeper

First, a quick note on ZooKeeper. Kafka actually ships with an embedded ZooKeeper, but we won't use it here: the embedded one is single-node only, while production deployments normally run a ZooKeeper cluster, so we install and configure ZooKeeper separately.

ZooKeeper is an open-source distributed coordination service, an open-source implementation of Google's Chubby. Distributed applications can build features such as data publish/subscribe, load balancing, naming services, distributed coordination/notification, cluster management, master election, and configuration maintenance on top of it. ZooKeeper has three roles: leader, follower, and observer. At any moment an ensemble has exactly one leader; the rest are followers and observers. Observers do not vote, and by default an ensemble contains only leaders and followers.

Download ZooKeeper

brew install zookeeper

Note that after installation, ZooKeeper's bin directory and its configuration files live in different places:

bin directory: /usr/local/Cellar/zookeeper/3.4.14

Configuration files: /usr/local/etc/zookeeper

ZooKeeper start/stop logs: /usr/local/var/log/zookeeper

Environment Setup

  1. Command: open -e .bash_profile

    MONGODB_HOME=/usr/local/mongodb

    KAFKA_HOME=/usr/local/Cellar/kafka/2.4.0

    ZOOKEEPER_HOME=/usr/local/Cellar/zookeeper/3.4.14

    PATH="/Library/Frameworks/Python.framework/Versions/3.7/bin:$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin:$MONGODB_HOME/bin:${PATH}"
    export PATH

    export PATH="/Users/yangweijie/anaconda3/bin:$PATH"


    export JAVA_HOME=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home
    export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
    export PATH=$JAVA_HOME/bin:$PATH
  2. Command: source .bash_profile

    This makes the profile changes take effect.

Modify the ZooKeeper Configuration File

  1. Go to /usr/local/etc/zookeeper and copy zoo_sample.cfg to zoo.cfg

    cd /usr/local/etc/zookeeper
    cp zoo_sample.cfg zoo.cfg

  2. Edit zoo.cfg, mainly to set up the data and log directories; they don't exist by default, so we create them ourselves:

    open -e zoo.cfg
    mkdir -p tmp/data
    mkdir -p tmp/log

# zoo.cfg contents
# The number of milliseconds of each tick
# ZooKeeper server heartbeat interval, in ms
tickTime=2000

# The number of ticks that the initial
# synchronization phase can take
# Initial synchronization time allowed when electing a new leader, in ticks
initLimit=10

# The number of ticks that can pass between
# sending a request and getting an acknowledgement
# Max tolerated leader-follower heartbeat delay: if a follower takes longer
# than syncLimit * tickTime to respond, the leader considers it dead and
# removes it from the ensemble
syncLimit=5

# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
# Data directory
dataDir=/usr/local/etc/zookeeper/tmp/data
# the port at which the clients will connect
# Client-facing service port
clientPort=2181

# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

# Log directory
dataLogDir=/usr/local/etc/zookeeper/tmp/log

#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

dataDir: /usr/local/etc/zookeeper/tmp/data

dataLogDir: /usr/local/etc/zookeeper/tmp/log

Starting ZooKeeper

Start ZooKeeper:

zkServer start

Check ZooKeeper's status:

zkServer status

In fact, this doesn't really tell you whether ZooKeeper is running; I discuss this in the problems section below. To check whether ZooKeeper is actually up, look for "/usr/local/etc/zookeeper/tmp/data/zookeeper_server.pid"; the file disappears when ZooKeeper stops.
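A pid file can be stale, so a slightly more robust check is to also verify that the pid it records is still alive. A sketch (the path matches the Homebrew layout used in this article; the `zk_running` helper is my own):

```python
import os

# Check whether ZooKeeper is really running: the pid file must exist AND the
# process it names must still be alive. Adjust PID_FILE for your installation.
PID_FILE = "/usr/local/etc/zookeeper/tmp/data/zookeeper_server.pid"

def zk_running(pid_file=PID_FILE):
    try:
        with open(pid_file) as f:
            pid = int(f.read().strip())
    except (FileNotFoundError, ValueError):
        return False
    try:
        os.kill(pid, 0)    # signal 0: existence check only, nothing is sent
        return True
    except ProcessLookupError:
        return False       # stale pid file: the process is gone
    except PermissionError:
        return True        # process exists but is owned by another user

print(zk_running())
```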

Installing and Configuring Kafka

Download Kafka

brew install kafka

As with ZooKeeper, the bin directory and configuration files are in different places:

bin directory: /usr/local/Cellar/kafka/2.4.0

Configuration files: /usr/local/etc/kafka

Kafka start/stop logs: /usr/local/var/log/kafka

Environment Setup

See the bash_profile above.

Modify the Kafka Configuration File

Edit /usr/local/etc/kafka/server.properties as needed, mainly to change the log directory path, then create the corresponding directories yourself.

In my setup the Kafka message log directory is /usr/local/etc/kafka/tmp/kafka-logs

Starting Kafka

  1. Start Kafka

    kafka-server-start /usr/local/etc/kafka/server.properties
  2. Common commands:

Create a topic:

kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

List topics:

kafka-topics --list --zookeeper localhost:2181

Produce messages:

kafka-console-producer --broker-list localhost:9092 --topic test

Consume messages:

kafka-console-consumer --bootstrap-server localhost:9092 --topic test --from-beginning

List Java-related processes:

jps -l

Problems Encountered

Q1

Problem

On macOS, with Kafka 2.4.0 and ZooKeeper 3.4.14, I ran into the following during installation and startup:

"kafka-server-stop /usr/local/etc/kafka/server.properties" could not kill the Kafka process…

After running that command, "jps -l" showed that the original kafka.Kafka process was indeed gone, but then a new process running Kafka inexplicably appeared. What is going on… why won't it die?

As later sections show, jps actually cannot reliably confirm whether the Kafka server started normally.

Attempted Solutions

  1. Rewrite the kafka-server-stop.sh script:
PIDS=$(ps -ef | grep java | grep kafka | grep -v grep | awk '{print $2}')

for PID in $PIDS
do
  kill -9 $PID
done

echo -e "Stop Finished!\n"

Original kafka-server-stop.sh contents:

SIGNAL=${SIGNAL:-TERM}
PIDS=$(ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')

if [ -z "$PIDS" ]; then
  echo "No kafka server to stop"
  exit 1
else
  kill -s $SIGNAL $PIDS
fi
Result: no effect…
  2. Reference: https://blog.csdn.net/dengjili/article/details/95041267

    Again, modify kafka-server-stop.sh, changing:

PIDS=$(ps ax | grep -i 'kafka.Kafka' | grep java | grep -v grep | awk '{print $1}')

to:

PIDS=$(jps -lm | grep -i 'kafka.Kafka' | awk '{print $1}')
Result: still fails… sanity gone.
  3. While troubleshooting the third problem, I discovered that you cannot rely on jps alone to tell whether Kafka has stopped; it is extremely unreliable. Kafka may in fact already be down. In other words, "kafka-server-stop /usr/local/etc/kafka/server.properties" had actually killed the Kafka server process all along; we just couldn't see it…

    To judge whether Kafka is really up, try producing a message: kafka-console-producer --broker-list localhost:9092 --topic test

Result: solved!
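Producing a test message is a solid end-to-end check; a lighter-weight first check is simply whether anything accepts TCP connections on the broker port. A sketch (port 9092 is the default broker listener; the `broker_reachable` helper is my own):

```python
import socket

def broker_reachable(host="localhost", port=9092, timeout=1.0):
    """True if something accepts TCP connections on the broker port.
    Note: this only proves a listener exists, not that Kafka is healthy;
    the console-producer test above remains the stronger check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

print(broker_reachable())
```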

Q2

Problem

The command "zkServer stop" could not stop ZooKeeper, reporting:

ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo.cfg
Stopping zookeeper ... no zookeeper to stop (could not find file /usr/local/etc/zookeeper/tmp/data/zookeeper_server.pid)

Attempted Solutions

It turned out I had never actually started ZooKeeper; zkServer status does not reliably show whether it is up…

When I started ZooKeeper again, the file zookeeper_server.pid did indeed appear under /usr/local/etc/zookeeper/tmp/data.

So, to check whether ZooKeeper is really running, look for "/usr/local/etc/zookeeper/tmp/data/zookeeper_server.pid"; the file disappears when ZooKeeper stops.

Result: solved!

Q3

Problem

After failing to shut Kafka down, I rebooted the machine. After the reboot, "jps -l" no longer showed kafka.Kafka, so Kafka really was stopped. But a new problem appeared…

After running kafka-server-start /usr/local/etc/kafka/server.properties, Kafka would no longer start at all… This is getting rough…

[2020-04-24 23:16:58,980] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-24 23:16:59,391] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2020-04-24 23:16:59,392] INFO starting (kafka.server.KafkaServer)
[2020-04-24 23:16:59,392] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-04-24 23:16:59,406] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-24 23:16:59,410] INFO Client environment:zookeeper.version=3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,410] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,411] INFO Client environment:java.version=1.8.0_222 (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,411] INFO Client environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,411] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,411] INFO Client environment:java.class.path=/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/activation-1.1.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/aopalliance-repackaged-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/argparse4j-0.7.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/commons-cli-1.4.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-api-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-basic-auth-extension-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-file-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-json-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-mirror-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-mirror-client-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-runtime-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-transforms-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/guava-20.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/hk2-api-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/hk2-locator-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/hk2-utils-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-annotations-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-core-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-databind-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/usr/local/Cellar/kaf
ka/2.4.0/libexec/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-module-paranamer-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.activation-api-1.2.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.annotation-api-1.3.4.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.inject-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-client-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-common-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-container-servlet-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-container-servlet-core-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-hk2-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-media-jaxb-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-server-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-client-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-http-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-io-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-security-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-server-9.4.20.v20190813.jar:/usr/local/C
ellar/kafka/2.4.0/libexec/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-util-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-clients-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-log4j-appender-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-examples-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-scala_2.12-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-test-utils-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-tools-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka_2.12-2.4.0-sources.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka_2.12-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/log4j-1.2.17.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/lz4-java-1.6.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/maven-artifact-3.6.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/metrics-core-2.2.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-buffer-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-codec-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-common-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-handler-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-resolver-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-transport-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-transport-native-epoll-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-transport-native-unix-common-4.1.42.Final.jar:/usr/local/Cella
r/kafka/2.4.0/libexec/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/paranamer-2.8.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/plexus-utils-3.2.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/reflections-0.9.11.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/rocksdbjni-5.18.3.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-library-2.12.10.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-logging_2.12-3.9.2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-reflect-2.12.10.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/slf4j-api-1.7.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/slf4j-log4j12-1.7.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/snappy-java-1.1.7.3.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/validation-api-2.0.1.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/zookeeper-3.5.6.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/zookeeper-jute-3.5.6.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/zstd-jni-1.4.3-1.jar (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,411] INFO Client environment:java.library.path=/Users/yangweijie/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:java.io.tmpdir=/var/folders/cp/p4lwr_6n66s065dhcw80dzd80000gn/T/ (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:os.version=10.15.4 (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:user.name=yangweijie (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:user.home=/Users/yangweijie (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:user.dir=/usr/local/etc/kafka/tmp (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:os.memory.free=978MB (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,412] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,414] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@7722c3c3 (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,417] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-04-24 23:16:59,422] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-04-24 23:16:59,426] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-04-24 23:16:59,428] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-24 23:16:59,430] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-24 23:16:59,440] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:51131, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2020-04-24 23:16:59,444] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x10000053312004f, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2020-04-24 23:16:59,447] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-24 23:16:59,612] INFO Cluster ID = I5sBHS6MSMG4MmUddyviFQ (kafka.server.KafkaServer)
[2020-04-24 23:16:59,621] ERROR Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
kafka.common.InconsistentClusterIdException: The Cluster ID I5sBHS6MSMG4MmUddyviFQ doesn't match stored clusterId Some(Lz_cYIXrTryPr_06DLt6hQ) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.
at kafka.server.KafkaServer.startup(KafkaServer.scala:220)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
[2020-04-24 23:16:59,623] INFO shutting down (kafka.server.KafkaServer)
[2020-04-24 23:16:59,625] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2020-04-24 23:16:59,732] INFO Session: 0x10000053312004f closed (org.apache.zookeeper.ZooKeeper)
[2020-04-24 23:16:59,732] INFO EventThread shut down for session: 0x10000053312004f (org.apache.zookeeper.ClientCnxn)
[2020-04-24 23:16:59,734] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2020-04-24 23:16:59,737] INFO shut down completed (kafka.server.KafkaServer)
[2020-04-24 23:16:59,737] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-04-24 23:16:59,739] INFO shutting down (kafka.server.KafkaServer)

Judging by the output of "jps -l", it indeed failed to start:

yangweijieMacBook-Pro:tmp yangweijie$ jps -l
640 org.apache.zookeeper.server.quorum.QuorumPeerMain
30653 sun.tools.jps.Jps

Then I went to take a shower… Came back, ran "jps -l" again, and what do you know, Kafka was up. It started by itself??? Unbelievable…

yangweijieMacBook-Pro:~ yangweijie$ jps  -l
26240 sun.tools.jps.Jps
25920 kafka.Kafka
640 org.apache.zookeeper.server.quorum.QuorumPeerMain

I didn't believe it, so I tried to produce a message:

kafka-console-producer --broker-list localhost:9092 --topic test

Sure enough, it errored:

WARN [Producer clientId=console-producer] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)

Now I was baffled… doesn't a kafka.Kafka process mean the Kafka server is alive?

Then I ran "jps -l" again, and the process was dead again…

yangweijieMacBook-Pro:~ yangweijie$ jps -l
640 org.apache.zookeeper.server.quorum.QuorumPeerMain
32381 sun.tools.jps.Jps

To prove my eyes weren't playing tricks, I tried once more, and a new Kafka process appeared…

yangweijieMacBook-Pro:~ yangweijie$ jps -l
32704 sun.tools.jps.Jps
640 org.apache.zookeeper.server.quorum.QuorumPeerMain
32384 kafka.Kafka

Another try, and the Kafka process was gone again…

I kept trying many more times: no Kafka process at all… So I decided to start Kafka once more… still wouldn't start…

Out of boredom I kept repeating "jps -l", and found that roughly one run in 5-6 would show a "kafka.Kafka" process, which then vanished on its own… Black magic?????? I'm done…

Attempted Solutions

Reference: https://blog.csdn.net/Sakitaf/article/details/104954268

Back to the original third problem, the persistent startup failure, i.e. the error above:

kafka.common.InconsistentClusterIdException: The Cluster ID I5sBHS6MSMG4MmUddyviFQ doesn't match stored clusterId Some(Lz_cYIXrTryPr_06DLt6hQ) in meta.properties. The broker is trying to join the wrong cluster. Configured zookeeper.connect may be wrong.

So I went looking for meta.properties, which turned out to live in /usr/local/etc/kafka/tmp/kafka-logs/. I changed the Cluster ID inside meta.properties to the one the error message expected. This is presumably a 2.4.0 quirk: after my earlier force-kill of Kafka, it does not proactively reconcile with the new ZooKeeper instance.
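The fix can also be scripted. A hedged sketch that rewrites cluster.id in meta.properties (the path and both ids come from this article's setup; substitute your own, and note that if the broker's data is disposable, deleting the log directory achieves the same reset):

```python
import re

# Sketch: point meta.properties at the cluster id ZooKeeper now reports.
# META and NEW_ID are this article's values; substitute your own.
META = "/usr/local/etc/kafka/tmp/kafka-logs/meta.properties"
NEW_ID = "I5sBHS6MSMG4MmUddyviFQ"   # id from the InconsistentClusterIdException

def set_cluster_id(text, new_id):
    # Replace the cluster.id line, leaving version=, broker.id= etc. untouched.
    return re.sub(r"^cluster\.id=.*$", f"cluster.id={new_id}", text, flags=re.M)

# Uncomment to apply in place:
# with open(META) as f: contents = f.read()
# with open(META, "w") as f: f.write(set_cluster_id(contents, NEW_ID))

demo = "version=0\nbroker.id=0\ncluster.id=Lz_cYIXrTryPr_06DLt6hQ\n"
print(set_cluster_id(demo, NEW_ID))
```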

Result: solved!

Q4

Problem

Recently I wanted to check Kafka's startup logs, and what I found was alarming: the startup logs alone occupied over 4 GB of storage…

A look inside showed it was all startup logging, rolling every 10 minutes…

I deleted it in a panic. Now I need to work out how to stop it growing so fast; my 256 GB laptop is trembling…

Attempted Solutions

It looks like I had shut ZooKeeper down earlier while Kafka kept running, so Kafka kept retrying the connection and the logs exploded…

I ran "kafka-server-stop /usr/local/etc/kafka/server.properties" once more and then inspected the logs under /usr/local/var/log/kafka.

First, there are indeed shutdown entries. Filtering the log for "shutdown" gives:

controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
[2020-04-29 18:21:19,508] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
[2020-04-29 18:21:19,517] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-04-29 18:21:19,518] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2020-04-29 18:21:19,519] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2020-04-29 18:21:19,637] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,911] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,912] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:20,187] INFO [ExpirationReaper-0-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:20,262] INFO Shutdown complete. (kafka.log.LogManager)
[2020-04-29 18:21:20,714] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:21,714] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:22,786] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:22,814] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000

However, "jps -l" showed the Kafka process still alive, and producing/consuming messages still worked, so let's look at the full log:

[2020-04-29 18:21:17,883] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-29 18:21:18,347] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2020-04-29 18:21:18,348] INFO starting (kafka.server.KafkaServer)
[2020-04-29 18:21:18,348] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-04-29 18:21:18,364] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:18,370] INFO Client environment:zookeeper.version=3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,370] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,370] INFO Client environment:java.version=1.8.0_222 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,370] INFO Client environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,370] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,370] INFO Client environment:java.class.path=/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/activation-1.1.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/aopalliance-repackaged-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/argparse4j-0.7.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/audience-annotations-0.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/commons-cli-1.4.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/commons-lang3-3.8.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-api-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-basic-auth-extension-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-file-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-json-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-mirror-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-mirror-client-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-runtime-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/connect-transforms-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/guava-20.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/hk2-api-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/hk2-locator-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/hk2-utils-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-annotations-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-core-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-databind-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-dataformat-csv-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-datatype-jdk8-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-jaxrs-base-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-jaxrs-json-provider-2.10.0.jar:/usr/local/Cellar/kaf
ka/2.4.0/libexec/bin/../libs/jackson-module-jaxb-annotations-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-module-paranamer-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jackson-module-scala_2.12-2.10.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.activation-api-1.2.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.annotation-api-1.3.4.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.inject-2.5.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.ws.rs-api-2.1.5.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/javassist-3.22.0-CR2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/javax.servlet-api-3.1.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/javax.ws.rs-api-2.1.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jaxb-api-2.3.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-client-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-common-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-container-servlet-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-container-servlet-core-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-hk2-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-media-jaxb-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jersey-server-2.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-client-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-continuation-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-http-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-io-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-security-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-server-9.4.20.v20190813.jar:/usr/local/C
ellar/kafka/2.4.0/libexec/bin/../libs/jetty-servlet-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-servlets-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jetty-util-9.4.20.v20190813.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/jopt-simple-5.0.4.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-clients-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-log4j-appender-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-examples-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-scala_2.12-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-streams-test-utils-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka-tools-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka_2.12-2.4.0-sources.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/kafka_2.12-2.4.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/log4j-1.2.17.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/lz4-java-1.6.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/maven-artifact-3.6.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/metrics-core-2.2.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-buffer-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-codec-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-common-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-handler-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-resolver-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-transport-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-transport-native-epoll-4.1.42.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/netty-transport-native-unix-common-4.1.42.Final.jar:/usr/local/Cella
r/kafka/2.4.0/libexec/bin/../libs/osgi-resource-locator-1.0.1.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/paranamer-2.8.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/plexus-utils-3.2.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/reflections-0.9.11.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/rocksdbjni-5.18.3.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-collection-compat_2.12-2.1.2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-java8-compat_2.12-0.9.0.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-library-2.12.10.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-logging_2.12-3.9.2.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/scala-reflect-2.12.10.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/slf4j-api-1.7.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/slf4j-log4j12-1.7.28.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/snappy-java-1.1.7.3.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/validation-api-2.0.1.Final.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/zookeeper-3.5.6.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/zookeeper-jute-3.5.6.jar:/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/zstd-jni-1.4.3-1.jar (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,370] INFO Client environment:java.library.path=/Users/yangweijie/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:java.io.tmpdir=/var/folders/cp/p4lwr_6n66s065dhcw80dzd80000gn/T/ (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:os.version=10.15.4 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:user.name=yangweijie (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:user.home=/Users/yangweijie (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:user.dir=/usr/local (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:os.memory.free=978MB (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,371] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,373] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@7722c3c3 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:18,377] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-04-29 18:21:18,383] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-04-29 18:21:18,388] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:18,389] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:18,392] INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:18,403] INFO Socket connection established, initiating session, client: /127.0.0.1:56049, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:18,408] INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100000533120320, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:18,411] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:18,600] INFO Cluster ID = I5sBHS6MSMG4MmUddyviFQ (kafka.server.KafkaServer)
[2020-04-29 18:21:18,659] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.4-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/etc/kafka/tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.4-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
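The dump above is the broker's effective configuration, most of it defaults. The handful of values that matter for this single-node macOS setup correspond to entries one would put in Kafka's `server.properties`; a minimal fragment mirroring exactly the values logged above:

```properties
# server.properties — the non-default essentials from the KafkaConfig dump
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/usr/local/etc/kafka/tmp/kafka-logs
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
```

Note that `log.dirs` (the actual data directory) overrides `log.dir`, which still shows its `/tmp/kafka-logs` default in the dump; keeping data out of `/tmp` avoids losing partitions on reboot.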
[2020-04-29 18:21:18,688] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:18,688] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:18,688] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:18,721] INFO Loading logs. (kafka.log.LogManager)
[2020-04-29 18:21:18,768] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,770] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,802] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,804] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 60 ms (kafka.log.Log)
[2020-04-29 18:21:18,812] INFO [Log partition=__consumer_offsets-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,812] INFO [Log partition=__consumer_offsets-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,814] INFO [Log partition=__consumer_offsets-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,815] INFO [Log partition=__consumer_offsets-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:18,819] INFO [Log partition=__consumer_offsets-7, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,819] INFO [Log partition=__consumer_offsets-7, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,821] INFO [Log partition=__consumer_offsets-7, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,822] INFO [Log partition=__consumer_offsets-7, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 5 ms (kafka.log.Log)
[2020-04-29 18:21:18,826] INFO [Log partition=__consumer_offsets-31, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,826] INFO [Log partition=__consumer_offsets-31, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,828] INFO [Log partition=__consumer_offsets-31, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,829] INFO [Log partition=__consumer_offsets-31, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:18,832] INFO [Log partition=__consumer_offsets-36, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,832] INFO [Log partition=__consumer_offsets-36, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,834] INFO [Log partition=__consumer_offsets-36, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,835] INFO [Log partition=__consumer_offsets-36, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:18,838] INFO [Log partition=__consumer_offsets-38, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,839] INFO [Log partition=__consumer_offsets-38, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,841] INFO [Log partition=__consumer_offsets-38, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,841] INFO [Log partition=__consumer_offsets-38, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:18,845] INFO [Log partition=__consumer_offsets-6, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,845] INFO [Log partition=__consumer_offsets-6, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,847] INFO [Log partition=__consumer_offsets-6, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,847] INFO [Log partition=__consumer_offsets-6, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
...(identical zero-offset recovery lines for __consumer_offsets-1, -8, -39, -37 and -30 omitted; same pattern as above)
[2020-04-29 18:21:18,881] INFO [Log partition=test-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:18,881] INFO [Log partition=test-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,891] INFO [ProducerStateManager partition=test-0] Writing producer snapshot at offset 13 (kafka.log.ProducerStateManager)
[2020-04-29 18:21:18,898] INFO [Log partition=test-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 13 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:18,899] INFO [ProducerStateManager partition=test-0] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/test-0/00000000000000000013.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:18,906] INFO [Log partition=test-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 13 in 27 ms (kafka.log.Log)
...(recovery of the remaining partitions omitted: __consumer_offsets-12 replays a producer snapshot up to offset 430, topic-demo-0 up to offset 20, and a few other partitions up to offset 3, just as test-0 does above; all other partitions load empty logs with the same zero-offset pattern)
[2020-04-29 18:21:19,108] INFO [Log partition=__consumer_offsets-19, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:19,108] INFO [Log partition=__consumer_offsets-19, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,110] INFO [Log partition=__consumer_offsets-19, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,111] INFO [Log partition=__consumer_offsets-19, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:19,112] INFO [Log partition=__consumer_offsets-26, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:19,112] INFO [Log partition=__consumer_offsets-26, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,114] INFO [Log partition=__consumer_offsets-26, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,115] INFO [Log partition=__consumer_offsets-26, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:19,116] INFO [Log partition=__consumer_offsets-10, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:19,116] INFO [Log partition=__consumer_offsets-10, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,118] INFO [Log partition=__consumer_offsets-10, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,119] INFO [Log partition=__consumer_offsets-10, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:19,120] INFO [Log partition=__consumer_offsets-28, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:19,120] INFO [Log partition=__consumer_offsets-28, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,122] INFO [Log partition=__consumer_offsets-28, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,122] INFO [Log partition=__consumer_offsets-28, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 3 ms (kafka.log.Log)
[2020-04-29 18:21:19,124] INFO [Log partition=__consumer_offsets-17, dir=/usr/local/etc/kafka/tmp/kafka-logs] Recovering unflushed segment 0 (kafka.log.Log)
[2020-04-29 18:21:19,124] INFO [Log partition=__consumer_offsets-17, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,126] INFO [Log partition=__consumer_offsets-17, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:19,127] INFO [Log partition=__consumer_offsets-17, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:19,128] INFO Logs loading complete in 407 ms. (kafka.log.LogManager)
[2020-04-29 18:21:19,137] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-04-29 18:21:19,138] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-04-29 18:21:19,399] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-04-29 18:21:19,420] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-04-29 18:21:19,421] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-04-29 18:21:19,437] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,437] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,437] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,438] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,447] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-04-29 18:21:19,492] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-04-29 18:21:19,502] ERROR Error while creating ephemeral at /brokers/ids/0, node already exists and owner '72057616369582879' does not match current session '72057616369582880' (kafka.zk.KafkaZkClient$CheckedEphemeral)
[2020-04-29 18:21:19,508] ERROR [KafkaServer id=0] Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists
at org.apache.zookeeper.KeeperException.create(KeeperException.java:126)
at kafka.zk.KafkaZkClient$CheckedEphemeral.getAfterNodeExists(KafkaZkClient.scala:1815)
at kafka.zk.KafkaZkClient$CheckedEphemeral.create(KafkaZkClient.scala:1753)
at kafka.zk.KafkaZkClient.checkedEphemeralCreate(KafkaZkClient.scala:1720)
at kafka.zk.KafkaZkClient.registerBroker(KafkaZkClient.scala:93)
at kafka.server.KafkaServer.startup(KafkaServer.scala:270)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:44)
at kafka.Kafka$.main(Kafka.scala:84)
at kafka.Kafka.main(Kafka.scala)
[2020-04-29 18:21:19,509] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
[2020-04-29 18:21:19,510] INFO [SocketServer brokerId=0] Stopping socket server request processors (kafka.network.SocketServer)
[2020-04-29 18:21:19,514] INFO [SocketServer brokerId=0] Stopped socket server request processors (kafka.network.SocketServer)
[2020-04-29 18:21:19,516] INFO [ReplicaManager broker=0] Shutting down (kafka.server.ReplicaManager)
[2020-04-29 18:21:19,517] INFO [LogDirFailureHandler]: Shutting down (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-04-29 18:21:19,517] INFO [LogDirFailureHandler]: Stopped (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-04-29 18:21:19,517] INFO [LogDirFailureHandler]: Shutdown completed (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-04-29 18:21:19,517] INFO [ReplicaFetcherManager on broker 0] shutting down (kafka.server.ReplicaFetcherManager)
[2020-04-29 18:21:19,518] INFO [ReplicaFetcherManager on broker 0] shutdown completed (kafka.server.ReplicaFetcherManager)
[2020-04-29 18:21:19,519] INFO [ReplicaAlterLogDirsManager on broker 0] shutting down (kafka.server.ReplicaAlterLogDirsManager)
[2020-04-29 18:21:19,519] INFO [ReplicaAlterLogDirsManager on broker 0] shutdown completed (kafka.server.ReplicaAlterLogDirsManager)
[2020-04-29 18:21:19,519] INFO [ExpirationReaper-0-Fetch]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,637] INFO [ExpirationReaper-0-Fetch]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,637] INFO [ExpirationReaper-0-Fetch]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,638] INFO [ExpirationReaper-0-Produce]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,911] INFO [ExpirationReaper-0-Produce]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,911] INFO [ExpirationReaper-0-Produce]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,912] INFO [ExpirationReaper-0-DeleteRecords]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,912] INFO [ExpirationReaper-0-DeleteRecords]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,912] INFO [ExpirationReaper-0-DeleteRecords]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:19,913] INFO [ExpirationReaper-0-ElectLeader]: Shutting down (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:20,187] INFO [ExpirationReaper-0-ElectLeader]: Stopped (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:20,187] INFO [ExpirationReaper-0-ElectLeader]: Shutdown completed (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:20,194] INFO [ReplicaManager broker=0] Shut down completely (kafka.server.ReplicaManager)
[2020-04-29 18:21:20,195] INFO Shutting down. (kafka.log.LogManager)
[2020-04-29 18:21:20,262] INFO Shutdown complete. (kafka.log.LogManager)
[2020-04-29 18:21:20,263] INFO [ZooKeeperClient Kafka server] Closing. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:20,441] INFO Session: 0x100000533120320 closed (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:20,441] INFO EventThread shut down for session: 0x100000533120320 (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:20,442] INFO [ZooKeeperClient Kafka server] Closed. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:20,442] INFO [ThrottledChannelReaper-Fetch]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:20,714] INFO [ThrottledChannelReaper-Fetch]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:20,714] INFO [ThrottledChannelReaper-Fetch]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:20,714] INFO [ThrottledChannelReaper-Produce]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:21,714] INFO [ThrottledChannelReaper-Produce]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:21,714] INFO [ThrottledChannelReaper-Produce]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:21,714] INFO [ThrottledChannelReaper-Request]: Shutting down (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:22,786] INFO [ThrottledChannelReaper-Request]: Stopped (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:22,786] INFO [ThrottledChannelReaper-Request]: Shutdown completed (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:22,788] INFO [SocketServer brokerId=0] Shutting down socket server (kafka.network.SocketServer)
[2020-04-29 18:21:22,814] INFO [SocketServer brokerId=0] Shutdown completed (kafka.network.SocketServer)
[2020-04-29 18:21:22,818] INFO [KafkaServer id=0] shut down completed (kafka.server.KafkaServer)
[2020-04-29 18:21:22,819] ERROR Exiting Kafka. (kafka.server.KafkaServerStartable)
[2020-04-29 18:21:22,825] INFO [KafkaServer id=0] shutting down (kafka.server.KafkaServer)
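The ERROR above is the root cause of this failed startup, and it is a classic symptom behind "no broker" complaints: the broker registers itself in ZooKeeper as an *ephemeral* znode `/brokers/ids/0`. An ephemeral node only disappears when the session that created it expires, and `zookeeper.session.timeout.ms = 6000` in the config dump below. So if the previous broker process died without a clean shutdown and you restart within that ~6 s window, the old znode still exists, registration throws `NodeExistsException`, and the broker exits. Waiting a few seconds and retrying (or always shutting down cleanly via `kafka-server-stop`) avoids it. As a sanity check (not a fix), the two session IDs printed in the error line can be decoded to confirm the stale owner really is the immediately preceding broker session:

```python
# Decode the two ZooKeeper session IDs from the ERROR line above.
stale_owner  = 72057616369582879  # owner of the leftover ephemeral /brokers/ids/0
current_sess = 72057616369582880  # the session of this failed startup attempt

# ZooKeeper assigns session IDs sequentially per server, so printing them
# in hex makes the relationship obvious.
print(hex(stale_owner))   # 0x10000053312031f
print(hex(current_sess))  # 0x100000533120320  -- matches "Session: 0x100000533120320 closed" later in the log

# Adjacent IDs: the znode belongs to the session right before ours,
# i.e. the earlier broker process whose session had not yet timed out.
assert current_sess - stale_owner == 1
```

Note that the second startup attempt (further down) negotiates session `0x100000533120321` after the old session has been closed, which is why it succeeds in registering.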
[2020-04-29 18:21:28,872] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2020-04-29 18:21:29,333] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler)
[2020-04-29 18:21:29,334] INFO starting (kafka.server.KafkaServer)
[2020-04-29 18:21:29,335] INFO Connecting to zookeeper on localhost:2181 (kafka.server.KafkaServer)
[2020-04-29 18:21:29,350] INFO [ZooKeeperClient Kafka server] Initializing a new session to localhost:2181. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:29,355] INFO Client environment:zookeeper.version=3.5.6-c11b7e26bc554b8523dc929761dd28808913f091, built on 10/08/2019 20:18 GMT (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,355] INFO Client environment:host.name=localhost (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,355] INFO Client environment:java.version=1.8.0_222 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,355] INFO Client environment:java.vendor=AdoptOpenJDK (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,355] INFO Client environment:java.home=/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:java.class.path=/usr/local/Cellar/kafka/2.4.0/libexec/bin/../libs/... (full classpath of the bundled jars omitted) (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:java.library.path=/Users/yangweijie/Library/Java/Extensions:/Library/Java/Extensions:/Network/Library/Java/Extensions:/System/Library/Java/Extensions:/usr/lib/java:. (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:java.io.tmpdir=/var/folders/cp/p4lwr_6n66s065dhcw80dzd80000gn/T/ (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:os.name=Mac OS X (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:os.arch=x86_64 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:os.version=10.15.4 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:user.name=yangweijie (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:user.home=/Users/yangweijie (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:user.dir=/usr/local (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:os.memory.free=978MB (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,356] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,358] INFO Initiating client connection, connectString=localhost:2181 sessionTimeout=6000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@7722c3c3 (org.apache.zookeeper.ZooKeeper)
[2020-04-29 18:21:29,362] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util)
[2020-04-29 18:21:29,367] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket)
[2020-04-29 18:21:29,372] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:29,374] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:29,377] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:29,387] INFO Socket connection established, initiating session, client: /0:0:0:0:0:0:0:1:56065, server: localhost/0:0:0:0:0:0:0:1:2181 (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:29,391] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x100000533120321, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2020-04-29 18:21:29,394] INFO [ZooKeeperClient Kafka server] Connected. (kafka.zookeeper.ZooKeeperClient)
[2020-04-29 18:21:29,570] INFO Cluster ID = I5sBHS6MSMG4MmUddyviFQ (kafka.server.KafkaServer)
[2020-04-29 18:21:29,625] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = null
advertised.port = null
alter.config.policy.class.name = null
alter.log.dirs.replication.quota.window.num = 11
alter.log.dirs.replication.quota.window.size.seconds = 1
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 0
broker.id.generation.enable = true
broker.rack = null
client.quota.callback.class = null
compression.type = producer
connection.failed.authentication.delay.ms = 100
connections.max.idle.ms = 600000
connections.max.reauth.ms = 0
control.plane.listener.name = null
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 1
delegation.token.expiry.check.interval.ms = 3600000
delegation.token.expiry.time.ms = 86400000
delegation.token.master.key = null
delegation.token.max.lifetime.ms = 604800000
delete.records.purgatory.purge.interval.requests = 1
delete.topic.enable = true
fetch.purgatory.purge.interval.requests = 1000
group.initial.rebalance.delay.ms = 0
group.max.session.timeout.ms = 1800000
group.max.size = 2147483647
group.min.session.timeout.ms = 6000
host.name =
inter.broker.listener.name = null
inter.broker.protocol.version = 2.4-IV1
kafka.metrics.polling.interval.secs = 10
kafka.metrics.reporters = []
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
listeners = PLAINTEXT://:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.max.compaction.lag.ms = 9223372036854775807
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /usr/local/etc/kafka/tmp/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.flush.start.offset.checkpoint.interval.ms = 60000
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.downconversion.enable = true
log.message.format.version = 2.4-IV1
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections = 2147483647
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
max.incremental.fetch.session.cache.slots = 1000
message.max.bytes = 1000012
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 3
num.partitions = 1
num.recovery.threads.per.data.dir = 1
num.replica.alter.log.dirs.threads = null
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 10080
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 1
offsets.topic.segment.bytes = 104857600
password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
password.encoder.iterations = 4096
password.encoder.key.length = 128
password.encoder.keyfactory.algorithm = null
password.encoder.old.secret = null
password.encoder.secret = null
port = 9092
principal.builder.class = null
producer.purgatory.purge.interval.requests = 1000
queued.max.request.bytes = -1
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 1048576
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.selector.class = null
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.client.callback.handler.class = null
sasl.enabled.mechanisms = [GSSAPI]
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism.inter.broker.protocol = GSSAPI
sasl.server.callback.handler.class = null
security.inter.broker.protocol = PLAINTEXT
security.providers = null
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = []
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.principal.mapping.rules = DEFAULT
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
transaction.abort.timed.out.transaction.cleanup.interval.ms = 60000
transaction.max.timeout.ms = 900000
transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
transaction.state.log.load.buffer.size = 5242880
transaction.state.log.min.isr = 1
transaction.state.log.num.partitions = 50
transaction.state.log.replication.factor = 1
transaction.state.log.segment.bytes = 104857600
transactional.id.expiration.ms = 604800000
unclean.leader.election.enable = false
zookeeper.connect = localhost:2181
zookeeper.connection.timeout.ms = 6000
zookeeper.max.in.flight.requests = 10
zookeeper.session.timeout.ms = 6000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
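Two things are worth reading out of this dump. First, `offsets.topic.num.partitions = 50` explains why the recovery phase at the top of the log iterates over fifty `__consumer_offsets-*` partitions. Second, several settings print the odd-looking value `9223372036854775807`: that is not a tuned number but simply Java's `Long.MAX_VALUE`, Kafka's way of saying "effectively never / unlimited" (e.g. `log.flush.interval.messages`, `log.cleaner.max.compaction.lag.ms`, the `quota.*.default` values). Likewise `2147483647` (`max.connections`, `group.max.size`) is `Integer.MAX_VALUE`. A quick check:

```python
# The sentinel values in the KafkaConfig dump are just Java's integer limits.
LONG_MAX = 2**63 - 1   # java.lang.Long.MAX_VALUE
INT_MAX  = 2**31 - 1   # java.lang.Integer.MAX_VALUE

print(LONG_MAX)  # 9223372036854775807, as seen in log.flush.interval.messages
print(INT_MAX)   # 2147483647, as seen in max.connections

assert LONG_MAX == 9223372036854775807
assert INT_MAX == 2147483647
```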
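The dump above is almost entirely defaults. Pulling out just the values that were actually set for this broker (all copied from the dump itself — the paths follow the Homebrew layout on macOS, so adjust them to your own install), the effective `server.properties` boils down to roughly this sketch:

```properties
# Minimal server.properties matching the non-default values
# visible in the KafkaConfig dump above (Homebrew install on macOS).
broker.id=0
listeners=PLAINTEXT://:9092
log.dirs=/usr/local/etc/kafka/tmp/kafka-logs

# Single-node setup: the internal topics only need one replica.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# The standalone ZooKeeper installed earlier.
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
```

In a production cluster the three replication-factor settings would be raised (typically to 3), and `zookeeper.connect` would list every node of the ZooKeeper ensemble.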
[2020-04-29 18:21:29,654] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:29,654] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:29,655] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
[2020-04-29 18:21:29,686] INFO Loading logs. (kafka.log.LogManager)
[2020-04-29 18:21:29,744] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,752] INFO [Log partition=__consumer_offsets-9, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 43 ms (kafka.log.Log)
[2020-04-29 18:21:29,760] INFO [Log partition=__consumer_offsets-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,761] INFO [Log partition=__consumer_offsets-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,765] INFO [Log partition=__consumer_offsets-7, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,765] INFO [Log partition=__consumer_offsets-7, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,769] INFO [Log partition=__consumer_offsets-31, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,769] INFO [Log partition=__consumer_offsets-31, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,773] INFO [Log partition=__consumer_offsets-36, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,773] INFO [Log partition=__consumer_offsets-36, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,777] INFO [Log partition=__consumer_offsets-38, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,777] INFO [Log partition=__consumer_offsets-38, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,781] INFO [Log partition=__consumer_offsets-6, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,781] INFO [Log partition=__consumer_offsets-6, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,785] INFO [Log partition=__consumer_offsets-1, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,785] INFO [Log partition=__consumer_offsets-1, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,788] INFO [Log partition=__consumer_offsets-8, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,788] INFO [Log partition=__consumer_offsets-8, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,792] INFO [Log partition=__consumer_offsets-39, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,792] INFO [Log partition=__consumer_offsets-39, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,796] INFO [Log partition=__consumer_offsets-37, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,796] INFO [Log partition=__consumer_offsets-37, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,799] INFO [Log partition=__consumer_offsets-30, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,799] INFO [Log partition=__consumer_offsets-30, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,814] INFO [Log partition=test-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 13 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,818] INFO [ProducerStateManager partition=test-0] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/test-0/00000000000000000013.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,828] INFO [Log partition=test-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 13 in 27 ms (kafka.log.Log)
[2020-04-29 18:21:29,833] INFO [Log partition=__consumer_offsets-12, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 430 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,833] INFO [ProducerStateManager partition=__consumer_offsets-12] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/__consumer_offsets-12/00000000000000000430.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,834] INFO [Log partition=__consumer_offsets-12, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 430 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:29,837] INFO [Log partition=__consumer_offsets-15, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,837] INFO [Log partition=__consumer_offsets-15, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,841] INFO [Log partition=__consumer_offsets-23, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 3 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,842] INFO [ProducerStateManager partition=__consumer_offsets-23] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/__consumer_offsets-23/00000000000000000003.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,842] INFO [Log partition=__consumer_offsets-23, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 3 in 3 ms (kafka.log.Log)
[2020-04-29 18:21:29,845] INFO [Log partition=__consumer_offsets-24, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,846] INFO [Log partition=__consumer_offsets-24, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,849] INFO [Log partition=test2-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,849] INFO [Log partition=test2-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,852] INFO [Log partition=__consumer_offsets-48, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,852] INFO [Log partition=__consumer_offsets-48, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,855] INFO [Log partition=__consumer_offsets-41, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,855] INFO [Log partition=__consumer_offsets-41, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,858] INFO [Log partition=__consumer_offsets-46, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,859] INFO [Log partition=__consumer_offsets-46, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,862] INFO [Log partition=__consumer_offsets-25, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,862] INFO [Log partition=__consumer_offsets-25, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,865] INFO [Log partition=__consumer_offsets-22, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,865] INFO [Log partition=__consumer_offsets-22, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,868] INFO [Log partition=__consumer_offsets-14, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,868] INFO [Log partition=__consumer_offsets-14, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,872] INFO [Log partition=__consumer_offsets-13, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 3 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,873] INFO [ProducerStateManager partition=__consumer_offsets-13] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/__consumer_offsets-13/00000000000000000003.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,873] INFO [Log partition=__consumer_offsets-13, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 3 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:29,876] INFO [Log partition=__consumer_offsets-47, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,876] INFO [Log partition=__consumer_offsets-47, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,879] INFO [Log partition=__consumer_offsets-40, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,879] INFO [Log partition=__consumer_offsets-40, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,882] INFO [Log partition=__consumer_offsets-49, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,882] INFO [Log partition=__consumer_offsets-49, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,885] INFO [Log partition=__consumer_offsets-35, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,885] INFO [Log partition=__consumer_offsets-35, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,889] INFO [Log partition=__consumer_offsets-32, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 3 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,890] INFO [ProducerStateManager partition=__consumer_offsets-32] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/__consumer_offsets-32/00000000000000000003.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,890] INFO [Log partition=__consumer_offsets-32, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 3 in 4 ms (kafka.log.Log)
[2020-04-29 18:21:29,892] INFO [Log partition=__consumer_offsets-4, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,892] INFO [Log partition=__consumer_offsets-4, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,895] INFO [Log partition=__consumer_offsets-3, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,895] INFO [Log partition=__consumer_offsets-3, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,898] INFO [Log partition=__consumer_offsets-33, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,898] INFO [Log partition=__consumer_offsets-33, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,901] INFO [Log partition=__consumer_offsets-34, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,901] INFO [Log partition=__consumer_offsets-34, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,903] INFO [Log partition=__consumer_offsets-2, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,904] INFO [Log partition=__consumer_offsets-2, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,906] INFO [Log partition=__consumer_offsets-5, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,906] INFO [Log partition=__consumer_offsets-5, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,909] INFO [Log partition=jerome-1, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,909] INFO [Log partition=jerome-1, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,911] INFO [Log partition=__consumer_offsets-45, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,911] INFO [Log partition=__consumer_offsets-45, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,914] INFO [Log partition=__consumer_offsets-42, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,914] INFO [Log partition=__consumer_offsets-42, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,916] INFO [Log partition=__consumer_offsets-29, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,917] INFO [Log partition=__consumer_offsets-29, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,919] INFO [Log partition=__consumer_offsets-16, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,919] INFO [Log partition=__consumer_offsets-16, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,923] INFO [Log partition=topic-demo-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 20 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,923] INFO [ProducerStateManager partition=topic-demo-0] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/topic-demo-0/00000000000000000020.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,923] INFO [Log partition=topic-demo-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 20 in 3 ms (kafka.log.Log)
[2020-04-29 18:21:29,926] INFO [Log partition=__consumer_offsets-11, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,926] INFO [Log partition=__consumer_offsets-11, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,928] INFO [Log partition=__consumer_offsets-18, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,928] INFO [Log partition=__consumer_offsets-18, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,931] INFO [Log partition=__consumer_offsets-27, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 3 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,932] INFO [ProducerStateManager partition=__consumer_offsets-27] Loading producer state from snapshot file '/usr/local/etc/kafka/tmp/kafka-logs/__consumer_offsets-27/00000000000000000003.snapshot' (kafka.log.ProducerStateManager)
[2020-04-29 18:21:29,932] INFO [Log partition=__consumer_offsets-27, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 3 in 3 ms (kafka.log.Log)
[2020-04-29 18:21:29,934] INFO [Log partition=__consumer_offsets-20, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,934] INFO [Log partition=__consumer_offsets-20, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,937] INFO [Log partition=__consumer_offsets-43, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,937] INFO [Log partition=__consumer_offsets-43, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,939] INFO [Log partition=__consumer_offsets-44, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,939] INFO [Log partition=__consumer_offsets-44, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,941] INFO [Log partition=jerome-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,942] INFO [Log partition=jerome-0, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,944] INFO [Log partition=__consumer_offsets-21, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,944] INFO [Log partition=__consumer_offsets-21, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 2 ms (kafka.log.Log)
[2020-04-29 18:21:29,946] INFO [Log partition=__consumer_offsets-19, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,946] INFO [Log partition=__consumer_offsets-19, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,948] INFO [Log partition=__consumer_offsets-26, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,948] INFO [Log partition=__consumer_offsets-26, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,950] INFO [Log partition=__consumer_offsets-10, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,950] INFO [Log partition=__consumer_offsets-10, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,952] INFO [Log partition=__consumer_offsets-28, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,952] INFO [Log partition=__consumer_offsets-28, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,954] INFO [Log partition=__consumer_offsets-17, dir=/usr/local/etc/kafka/tmp/kafka-logs] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
[2020-04-29 18:21:29,954] INFO [Log partition=__consumer_offsets-17, dir=/usr/local/etc/kafka/tmp/kafka-logs] Completed load of log with 1 segments, log start offset 0 and log end offset 0 in 1 ms (kafka.log.Log)
[2020-04-29 18:21:29,956] INFO Logs loading complete in 270 ms. (kafka.log.LogManager)
[2020-04-29 18:21:29,964] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
[2020-04-29 18:21:29,965] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
[2020-04-29 18:21:30,221] INFO Awaiting socket connections on 0.0.0.0:9092. (kafka.network.Acceptor)
[2020-04-29 18:21:30,243] INFO [SocketServer brokerId=0] Created data-plane acceptor and processors for endpoint : EndPoint(null,9092,ListenerName(PLAINTEXT),PLAINTEXT) (kafka.network.SocketServer)
[2020-04-29 18:21:30,244] INFO [SocketServer brokerId=0] Started 1 acceptor threads for data-plane (kafka.network.SocketServer)
[2020-04-29 18:21:30,259] INFO [ExpirationReaper-0-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,260] INFO [ExpirationReaper-0-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,260] INFO [ExpirationReaper-0-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,260] INFO [ExpirationReaper-0-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,270] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
[2020-04-29 18:21:30,310] INFO Creating /brokers/ids/0 (is it secure? false) (kafka.zk.KafkaZkClient)
[2020-04-29 18:21:30,320] INFO Stat of the created znode at /brokers/ids/0 is: 12126,12126,1588155690317,1588155690317,1,0,0,72057616369582881,188,0,12126 (kafka.zk.KafkaZkClient)
[2020-04-29 18:21:30,321] INFO Registered broker 0 at path /brokers/ids/0 with addresses: ArrayBuffer(EndPoint(localhost,9092,ListenerName(PLAINTEXT),PLAINTEXT)), czxid (broker epoch): 12126 (kafka.zk.KafkaZkClient)
[2020-04-29 18:21:30,366] INFO [ExpirationReaper-0-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,368] INFO [ExpirationReaper-0-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,369] INFO [ExpirationReaper-0-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,402] INFO [GroupCoordinator 0]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:21:30,402] INFO [GroupCoordinator 0]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:21:30,407] INFO [GroupMetadataManager brokerId=0] Removed 0 expired offsets in 5 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,416] INFO [ProducerId Manager 0]: Acquired new producerId block (brokerId:0,blockStartProducerId:6000,blockEndProducerId:6999) by writing to Zk with path version 7 (kafka.coordinator.transaction.ProducerIdManager)
[2020-04-29 18:21:30,434] INFO [TransactionCoordinator id=0] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-04-29 18:21:30,441] INFO [Transaction Marker Channel Manager 0]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2020-04-29 18:21:30,442] INFO [TransactionCoordinator id=0] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2020-04-29 18:21:30,462] INFO [ExpirationReaper-0-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2020-04-29 18:21:30,489] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2020-04-29 18:21:30,508] INFO [SocketServer brokerId=0] Started data-plane processors for 1 acceptors (kafka.network.SocketServer)
[2020-04-29 18:21:30,512] INFO Kafka version: 2.4.0 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-29 18:21:30,512] INFO Kafka commitId: 77a89fcf8d7fa018 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-29 18:21:30,512] INFO Kafka startTimeMs: 1588155690509 (org.apache.kafka.common.utils.AppInfoParser)
[2020-04-29 18:21:30,513] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)
[2020-04-29 18:21:30,585] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions Set(__consumer_offsets-22, __consumer_offsets-30, __consumer_offsets-8, __consumer_offsets-21, __consumer_offsets-4, __consumer_offsets-27, __consumer_offsets-7, __consumer_offsets-9, __consumer_offsets-46, __consumer_offsets-25, __consumer_offsets-35, __consumer_offsets-41, __consumer_offsets-33, __consumer_offsets-23, __consumer_offsets-49, __consumer_offsets-47, __consumer_offsets-16, test-0, __consumer_offsets-28, __consumer_offsets-31, __consumer_offsets-36, __consumer_offsets-42, __consumer_offsets-3, __consumer_offsets-18, __consumer_offsets-37, __consumer_offsets-15, __consumer_offsets-24, __consumer_offsets-38, __consumer_offsets-17, jerome-1, __consumer_offsets-48, __consumer_offsets-19, jerome-0, __consumer_offsets-11, __consumer_offsets-13, __consumer_offsets-2, __consumer_offsets-43, __consumer_offsets-6, __consumer_offsets-14, __consumer_offsets-20, __consumer_offsets-0, __consumer_offsets-44, __consumer_offsets-39, __consumer_offsets-12, topic-demo-0, __consumer_offsets-45, __consumer_offsets-1, __consumer_offsets-5, __consumer_offsets-26, __consumer_offsets-29, __consumer_offsets-34, __consumer_offsets-10, __consumer_offsets-32, __consumer_offsets-40) (kafka.server.ReplicaFetcherManager)
[2020-04-29 18:21:30,607] INFO [Partition __consumer_offsets-0 broker=0] Log loaded for partition __consumer_offsets-0 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,608] INFO [Partition __consumer_offsets-0 broker=0] __consumer_offsets-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,617] INFO [Partition __consumer_offsets-29 broker=0] Log loaded for partition __consumer_offsets-29 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,617] INFO [Partition __consumer_offsets-29 broker=0] __consumer_offsets-29 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,619] INFO [Partition __consumer_offsets-48 broker=0] Log loaded for partition __consumer_offsets-48 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,619] INFO [Partition __consumer_offsets-48 broker=0] __consumer_offsets-48 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,621] INFO [Partition __consumer_offsets-10 broker=0] Log loaded for partition __consumer_offsets-10 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,621] INFO [Partition __consumer_offsets-10 broker=0] __consumer_offsets-10 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,623] INFO [Partition __consumer_offsets-45 broker=0] Log loaded for partition __consumer_offsets-45 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,623] INFO [Partition __consumer_offsets-45 broker=0] __consumer_offsets-45 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,625] INFO [Partition __consumer_offsets-26 broker=0] Log loaded for partition __consumer_offsets-26 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,625] INFO [Partition __consumer_offsets-26 broker=0] __consumer_offsets-26 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,626] INFO [Partition __consumer_offsets-7 broker=0] Log loaded for partition __consumer_offsets-7 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,627] INFO [Partition __consumer_offsets-7 broker=0] __consumer_offsets-7 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,628] INFO [Partition __consumer_offsets-42 broker=0] Log loaded for partition __consumer_offsets-42 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,628] INFO [Partition __consumer_offsets-42 broker=0] __consumer_offsets-42 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,630] INFO [Partition __consumer_offsets-4 broker=0] Log loaded for partition __consumer_offsets-4 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,630] INFO [Partition __consumer_offsets-4 broker=0] __consumer_offsets-4 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,632] INFO [Partition __consumer_offsets-23 broker=0] Log loaded for partition __consumer_offsets-23 with initial high watermark 3 (kafka.cluster.Partition)
[2020-04-29 18:21:30,632] INFO [Partition __consumer_offsets-23 broker=0] __consumer_offsets-23 starts at Leader Epoch 0 from offset 3. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,633] INFO [Partition __consumer_offsets-1 broker=0] Log loaded for partition __consumer_offsets-1 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,633] INFO [Partition __consumer_offsets-1 broker=0] __consumer_offsets-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,634] INFO [Partition __consumer_offsets-20 broker=0] Log loaded for partition __consumer_offsets-20 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,634] INFO [Partition __consumer_offsets-20 broker=0] __consumer_offsets-20 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,636] INFO [Partition __consumer_offsets-39 broker=0] Log loaded for partition __consumer_offsets-39 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,636] INFO [Partition __consumer_offsets-39 broker=0] __consumer_offsets-39 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,638] INFO [Partition __consumer_offsets-17 broker=0] Log loaded for partition __consumer_offsets-17 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,638] INFO [Partition __consumer_offsets-17 broker=0] __consumer_offsets-17 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,640] INFO [Partition __consumer_offsets-36 broker=0] Log loaded for partition __consumer_offsets-36 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,640] INFO [Partition __consumer_offsets-36 broker=0] __consumer_offsets-36 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,641] INFO [Partition __consumer_offsets-14 broker=0] Log loaded for partition __consumer_offsets-14 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,642] INFO [Partition __consumer_offsets-14 broker=0] __consumer_offsets-14 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,643] INFO [Partition __consumer_offsets-33 broker=0] Log loaded for partition __consumer_offsets-33 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,643] INFO [Partition __consumer_offsets-33 broker=0] __consumer_offsets-33 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,645] INFO [Partition __consumer_offsets-49 broker=0] Log loaded for partition __consumer_offsets-49 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,645] INFO [Partition __consumer_offsets-49 broker=0] __consumer_offsets-49 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,647] INFO [Partition __consumer_offsets-11 broker=0] Log loaded for partition __consumer_offsets-11 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,647] INFO [Partition __consumer_offsets-11 broker=0] __consumer_offsets-11 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,649] INFO [Partition __consumer_offsets-30 broker=0] Log loaded for partition __consumer_offsets-30 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,649] INFO [Partition __consumer_offsets-30 broker=0] __consumer_offsets-30 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,650] INFO [Partition test-0 broker=0] Log loaded for partition test-0 with initial high watermark 13 (kafka.cluster.Partition)
[2020-04-29 18:21:30,650] INFO [Partition test-0 broker=0] test-0 starts at Leader Epoch 0 from offset 13. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,651] INFO [Partition __consumer_offsets-46 broker=0] Log loaded for partition __consumer_offsets-46 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,651] INFO [Partition __consumer_offsets-46 broker=0] __consumer_offsets-46 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,653] INFO [Partition __consumer_offsets-27 broker=0] Log loaded for partition __consumer_offsets-27 with initial high watermark 3 (kafka.cluster.Partition)
[2020-04-29 18:21:30,653] INFO [Partition __consumer_offsets-27 broker=0] __consumer_offsets-27 starts at Leader Epoch 0 from offset 3. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,654] INFO [Partition __consumer_offsets-8 broker=0] Log loaded for partition __consumer_offsets-8 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,654] INFO [Partition __consumer_offsets-8 broker=0] __consumer_offsets-8 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,656] INFO [Partition __consumer_offsets-24 broker=0] Log loaded for partition __consumer_offsets-24 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,656] INFO [Partition __consumer_offsets-24 broker=0] __consumer_offsets-24 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,658] INFO [Partition __consumer_offsets-43 broker=0] Log loaded for partition __consumer_offsets-43 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,658] INFO [Partition __consumer_offsets-43 broker=0] __consumer_offsets-43 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,659] INFO [Partition __consumer_offsets-5 broker=0] Log loaded for partition __consumer_offsets-5 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,659] INFO [Partition __consumer_offsets-5 broker=0] __consumer_offsets-5 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,661] INFO [Partition jerome-0 broker=0] Log loaded for partition jerome-0 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,661] INFO [Partition jerome-0 broker=0] jerome-0 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,662] INFO [Partition __consumer_offsets-21 broker=0] Log loaded for partition __consumer_offsets-21 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,662] INFO [Partition __consumer_offsets-21 broker=0] __consumer_offsets-21 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,664] INFO [Partition __consumer_offsets-40 broker=0] Log loaded for partition __consumer_offsets-40 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,664] INFO [Partition __consumer_offsets-40 broker=0] __consumer_offsets-40 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,666] INFO [Partition __consumer_offsets-2 broker=0] Log loaded for partition __consumer_offsets-2 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,666] INFO [Partition __consumer_offsets-2 broker=0] __consumer_offsets-2 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,667] INFO [Partition __consumer_offsets-37 broker=0] Log loaded for partition __consumer_offsets-37 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,667] INFO [Partition __consumer_offsets-37 broker=0] __consumer_offsets-37 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,669] INFO [Partition __consumer_offsets-18 broker=0] Log loaded for partition __consumer_offsets-18 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,669] INFO [Partition __consumer_offsets-18 broker=0] __consumer_offsets-18 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,670] INFO [Partition __consumer_offsets-34 broker=0] Log loaded for partition __consumer_offsets-34 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,670] INFO [Partition __consumer_offsets-34 broker=0] __consumer_offsets-34 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,672] INFO [Partition __consumer_offsets-15 broker=0] Log loaded for partition __consumer_offsets-15 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,672] INFO [Partition __consumer_offsets-15 broker=0] __consumer_offsets-15 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,674] INFO [Partition topic-demo-0 broker=0] Log loaded for partition topic-demo-0 with initial high watermark 20 (kafka.cluster.Partition)
[2020-04-29 18:21:30,674] INFO [Partition topic-demo-0 broker=0] topic-demo-0 starts at Leader Epoch 0 from offset 20. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,675] INFO [Partition __consumer_offsets-12 broker=0] Log loaded for partition __consumer_offsets-12 with initial high watermark 430 (kafka.cluster.Partition)
[2020-04-29 18:21:30,675] INFO [Partition __consumer_offsets-12 broker=0] __consumer_offsets-12 starts at Leader Epoch 0 from offset 430. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,676] INFO [Partition __consumer_offsets-31 broker=0] Log loaded for partition __consumer_offsets-31 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,676] INFO [Partition __consumer_offsets-31 broker=0] __consumer_offsets-31 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,677] INFO [Partition jerome-1 broker=0] Log loaded for partition jerome-1 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,677] INFO [Partition jerome-1 broker=0] jerome-1 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,679] INFO [Partition __consumer_offsets-9 broker=0] Log loaded for partition __consumer_offsets-9 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,679] INFO [Partition __consumer_offsets-9 broker=0] __consumer_offsets-9 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,680] INFO [Partition __consumer_offsets-47 broker=0] Log loaded for partition __consumer_offsets-47 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,680] INFO [Partition __consumer_offsets-47 broker=0] __consumer_offsets-47 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,682] INFO [Partition __consumer_offsets-19 broker=0] Log loaded for partition __consumer_offsets-19 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,682] INFO [Partition __consumer_offsets-19 broker=0] __consumer_offsets-19 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,683] INFO [Partition __consumer_offsets-28 broker=0] Log loaded for partition __consumer_offsets-28 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,683] INFO [Partition __consumer_offsets-28 broker=0] __consumer_offsets-28 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,684] INFO [Partition __consumer_offsets-38 broker=0] Log loaded for partition __consumer_offsets-38 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,685] INFO [Partition __consumer_offsets-38 broker=0] __consumer_offsets-38 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,686] INFO [Partition __consumer_offsets-35 broker=0] Log loaded for partition __consumer_offsets-35 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,686] INFO [Partition __consumer_offsets-35 broker=0] __consumer_offsets-35 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,687] INFO [Partition __consumer_offsets-6 broker=0] Log loaded for partition __consumer_offsets-6 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,688] INFO [Partition __consumer_offsets-6 broker=0] __consumer_offsets-6 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,689] INFO [Partition __consumer_offsets-44 broker=0] Log loaded for partition __consumer_offsets-44 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,689] INFO [Partition __consumer_offsets-44 broker=0] __consumer_offsets-44 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,690] INFO [Partition __consumer_offsets-25 broker=0] Log loaded for partition __consumer_offsets-25 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,690] INFO [Partition __consumer_offsets-25 broker=0] __consumer_offsets-25 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,692] INFO [Partition __consumer_offsets-16 broker=0] Log loaded for partition __consumer_offsets-16 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,692] INFO [Partition __consumer_offsets-16 broker=0] __consumer_offsets-16 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,693] INFO [Partition __consumer_offsets-22 broker=0] Log loaded for partition __consumer_offsets-22 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,693] INFO [Partition __consumer_offsets-22 broker=0] __consumer_offsets-22 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,694] INFO [Partition __consumer_offsets-41 broker=0] Log loaded for partition __consumer_offsets-41 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,694] INFO [Partition __consumer_offsets-41 broker=0] __consumer_offsets-41 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,696] INFO [Partition __consumer_offsets-32 broker=0] Log loaded for partition __consumer_offsets-32 with initial high watermark 3 (kafka.cluster.Partition)
[2020-04-29 18:21:30,696] INFO [Partition __consumer_offsets-32 broker=0] __consumer_offsets-32 starts at Leader Epoch 0 from offset 3. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,697] INFO [Partition __consumer_offsets-3 broker=0] Log loaded for partition __consumer_offsets-3 with initial high watermark 0 (kafka.cluster.Partition)
[2020-04-29 18:21:30,697] INFO [Partition __consumer_offsets-3 broker=0] __consumer_offsets-3 starts at Leader Epoch 0 from offset 0. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,698] INFO [Partition __consumer_offsets-13 broker=0] Log loaded for partition __consumer_offsets-13 with initial high watermark 3 (kafka.cluster.Partition)
[2020-04-29 18:21:30,698] INFO [Partition __consumer_offsets-13 broker=0] __consumer_offsets-13 starts at Leader Epoch 0 from offset 3. Previous Leader Epoch was: -1 (kafka.cluster.Partition)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-22 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-25 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-28 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-31 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-34 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-37 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-40 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-43 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-46 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-49 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,705] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-41 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-44 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-47 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-1 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-7 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-13 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-19 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-2 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-5 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-8 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-11 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-14 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-17 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-20 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-23 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-26 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-29 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-32 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-35 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-38 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-0 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-3 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-6 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-9 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,706] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-12 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-15 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-18 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-21 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-24 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-27 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-30 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-33 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-36 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-39 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-45 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,707] INFO [GroupMetadataManager brokerId=0] Scheduling loading of offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,709] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-22 in 4 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,710] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-25 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,710] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-28 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,711] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-31 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,711] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-34 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,711] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-37 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,711] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-40 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,711] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-43 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,712] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-46 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,712] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-49 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,712] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-41 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,712] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-44 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,713] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-47 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,713] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-1 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,713] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-4 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,714] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-7 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,714] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-10 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,734] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-13 in 20 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,735] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-16 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,735] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-19 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,735] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-2 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,735] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-5 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-8 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-11 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-14 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-17 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,736] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-20 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,739] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-23 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-26 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,740] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-29 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,742] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-32 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-35 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-38 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-0 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-3 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-6 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,743] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-9 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,763] INFO [GroupCoordinator 0]: Loading group metadata for group.demo with generation 18 (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:21:30,763] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-12 in 20 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,763] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-15 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,763] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-18 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,763] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-21 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,764] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-24 in 1 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,766] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-27 in 2 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,766] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,767] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-33 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,767] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,767] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-39 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,767] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,767] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-45 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:21:30,767] INFO [GroupMetadataManager brokerId=0] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-04-29 18:25:57,932] INFO [GroupCoordinator 0]: Preparing to rebalance group group.demo in state PreparingRebalance with old generation 18 (__consumer_offsets-12) (reason: Adding new member consumer-group.demo-1-6e7f0048-7df1-4574-8f74-31c7e1680f88 with group instanceid None) (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:25:57,937] INFO [GroupCoordinator 0]: Stabilized group group.demo generation 19 (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:25:57,945] INFO [GroupCoordinator 0]: Assignment received from leader for group group.demo for generation 19 (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:26:23,049] INFO [GroupCoordinator 0]: Member consumer-group.demo-1-6e7f0048-7df1-4574-8f74-31c7e1680f88 in group group.demo has failed, removing it from the group (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:26:23,050] INFO [GroupCoordinator 0]: Preparing to rebalance group group.demo in state PreparingRebalance with old generation 19 (__consumer_offsets-12) (reason: removing member consumer-group.demo-1-6e7f0048-7df1-4574-8f74-31c7e1680f88 on heartbeat expiration) (kafka.coordinator.group.GroupCoordinator)
[2020-04-29 18:26:23,051] INFO [GroupCoordinator 0]: Group group.demo with generation 20 is now empty (__consumer_offsets-12) (kafka.coordinator.group.GroupCoordinator)

In the end I found a promising-looking answer at https://www.orchome.com/32, but unfortunately it failed as well. Even a forced kill didn't help; a new Kafka service kept being recreated automatically…

As for the controlled.shutdown.enable setting mentioned in that article: when it is true (which has been the default since Kafka 0.9.0.1), a broker being shut down proactively migrates leadership of its partitions to other brokers in the ISR before exiting. If it is set to false, shutdown still succeeds; leader election just follows the normal failure path, which is only slightly slower…
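For reference, these knobs live in the broker's server.properties. A sketch of the relevant settings (the values shown are, to my knowledge, the defaults in Kafka 2.4):

```properties
# Controlled shutdown (default true since Kafka 0.9.0.1): before exiting,
# the broker hands off leadership of its partitions to other ISR members,
# so clients see a fast leader switch instead of a timeout.
controlled.shutdown.enable=true
# How many migration attempts to make, and the wait between them,
# before giving up and stopping anyway.
controlled.shutdown.max.retries=3
controlled.shutdown.retry.backoff.ms=5000
```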

So, regrettably, the problem remains unsolved…

Using Kafka from Java

Implementation

Maven dependencies

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.4.0</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.28</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-nop</artifactId>
    <version>1.7.2</version>
</dependency>

Producer client

package Kafka.Demo1;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class ProducerFastStart {
    public static final String brokerList = "localhost:9092";
    public static final String topic = "topic-demo";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("bootstrap.servers", brokerList);
        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);

        // Build the message to send
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, "hello,I am jerome_memory");

        // Send the message (fire-and-forget: send() is asynchronous,
        // so most failures surface later, not in this try block)
        try {
            producer.send(record);
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            // close() flushes any buffered records before releasing resources
            producer.close();
        }
    }
}
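The producer above hard-codes its configuration inline with the weakest delivery guarantee. A stdlib-only sketch (the helper class and method names are hypothetical, but the config keys are the standard Kafka client property names) of pulling the settings into one place and adding reliability-related keys — `acks=all` ties back to the ISR discussion earlier: the leader waits for the whole in-sync replica set before acknowledging a write:

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Hypothetical helper: centralizes producer configuration so every
    // client in the project builds it the same way.
    static Properties producerProps(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all: the leader acknowledges only after the full ISR has the record
        props.put("acks", "all");
        // retry transient broker errors instead of silently dropping the record
        props.put("retries", "3");
        return props;
    }

    public static void main(String[] args) {
        Properties p = producerProps("localhost:9092");
        System.out.println(p.getProperty("acks")); // prints "all"
    }
}
```

These Properties would then be passed to `new KafkaProducer<>(producerProps(brokerList))` exactly as in the class above.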

Consumer client

package Kafka.Demo1;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ConsumerFastStart {
    public static final String brokerList = "localhost:9092";
    public static final String topic = "topic-demo";
    public static final String groupId = "group.demo";

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("bootstrap.servers", brokerList);
        properties.put("group.id", groupId);
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);

        // Subscribe to the topic
        consumer.subscribe(Collections.singletonList(topic));

        // Poll for messages in a loop; a non-zero timeout lets poll() block
        // instead of busy-spinning the CPU
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(1000));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}
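One caveat with the consumer above: `while (true)` gives no clean exit path, so `consumer.close()` is never reached and the group coordinator only notices the consumer's departure via heartbeat expiration (exactly the "removing member … on heartbeat expiration" rebalance seen in the log earlier). The usual pattern is a running flag flipped from a shutdown hook, paired with `consumer.wakeup()`. A stdlib-only sketch of that loop skeleton (class and method names are hypothetical; the real Kafka calls appear only as comments):

```java
import java.util.concurrent.atomic.AtomicBoolean;

public class PollLoopSketch {
    // In the real consumer: the shutdown hook would also call consumer.wakeup(),
    // poll() would throw WakeupException, and a finally block would call
    // consumer.close() so the broker sees a graceful group departure.
    static int runLoop(AtomicBoolean running, int maxPolls) {
        int polls = 0;
        // maxPolls bounds the loop only so this sketch terminates
        while (running.get() && polls < maxPolls) {
            // consumer.poll(Duration.ofMillis(1000)) and record handling go here
            polls++;
        }
        return polls;
    }

    public static void main(String[] args) {
        AtomicBoolean running = new AtomicBoolean(true);
        // Ctrl-C / SIGTERM flips the flag instead of killing the loop mid-poll
        Runtime.getRuntime().addShutdownHook(new Thread(() -> running.set(false)));
        System.out.println("polled " + runLoop(running, 3) + " times");
    }
}
```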

Problems encountered

After starting the producer and consumer, the following error appeared (though the programs still ran normally):

SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.

Attempted solutions:

  1. Apply the fix suggested at the URL in the error message. Failed… the same error persisted…
<dependencies>
    <dependency>
        <groupId>org.apache.cassandra</groupId>
        <artifactId>cassandra-all</artifactId>
        <version>0.8.1</version>
        <exclusions>
            <exclusion>
                <groupId>org.slf4j</groupId>
                <artifactId>slf4j-log4j12</artifactId>
            </exclusion>
            <exclusion>
                <groupId>log4j</groupId>
                <artifactId>log4j</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
</dependencies>
  2. Following https://www.cnblogs.com/felixzh/p/12487644.html: go straight to the SLF4J entry on mvnrepository and add the matching package to pom.xml.

    URL: https://mvnrepository.com/artifact/org.slf4j/slf4j-log4j12/1.8.0-alpha2

    Well… that didn't work either.

  3. Finally solved. The final solution follows.

Reference: https://www.shuzhiduo.com/A/pRdBO486zn/

After adding the following dependencies, the error still appeared. (I found the slf4j and log4j versions by running cd /usr/local/Cellar/kafka/2.4.0/libexec/lib/ and checking the jars there.)

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-log4j12</artifactId>
    <version>1.7.28</version>
    <scope>test</scope>
</dependency>
<dependency>
    <groupId>log4j</groupId>
    <artifactId>log4j</artifactId>
    <version>1.2.17</version>
</dependency>

Then, after reading the reference article above, I additionally added:

<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-nop</artifactId>
    <version>1.7.2</version>
</dependency>

That resolved the problem… One thing worth knowing: slf4j-nop is the no-op binding, so it silences the warning by discarding all log output; since slf4j-log4j12 above is test-scoped, the main code path now logs nothing.

Thank you for reading. mua!