Looking for help in setting up a Kafka broker on a single node. I am using HDP 2.5.0.
Producer Node:
[root@sandbox ~]# /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list 10.74.58.106:6667 --topic test
Hi
[2017-05-15 11:47:24,559] ERROR Error when sending message to topic test with key: null, value: 2 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2017-05-15 11:48:24,564] ERROR Error when sending message to topic test with key: null, value: 9 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
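For reference, two quick checks on the broker host can confirm whether anything is actually listening on 6667 and which address the broker advertises to clients; the config path below assumes the standard HDP layout under /etc/kafka/conf and may differ on other setups:

# Is any process listening on the broker port?
netstat -tlnp | grep 6667

# What listener address does the broker advertise?
grep -E 'listeners|advertised' /etc/kafka/conf/server.properties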
Consumer Node:
[root@sandbox ~]# /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
{metadata.broker.list=sandbox.hortonworks.com:6667, request.timeout.ms=30000, client.id=console-consumer-18753, security.protocol=PLAINTEXT}
(the line above repeats several times)
I have also changed the URL from localhost to the IP, but I get the same issue with a different error.
[root@sandbox ~]# /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper 10.74.58.106:2181 --topic test --from-beginning
[2017-05-15 11:46:28,215] WARN Session 0x0 for server null, unexpected error, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn)
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
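The "Connection refused" suggests nothing is listening on 2181 at that IP (ZooKeeper may be bound only to localhost). A quick way to verify, assuming netstat and nc are available on the sandbox; ZooKeeper answers the four-letter command ruok with imok when it is reachable:

# Which address is ZooKeeper bound to, if it is running at all?
netstat -tlnp | grep 2181

# Should print "imok" if ZooKeeper is reachable at this address
echo ruok | nc 10.74.58.106 2181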
Commands used:
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list 10.74.58.106:6667 --topic test
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper 10.74.58.106:2181 --topic test --from-beginning
Answer by Samant Thakur
@Dinesh Das, I am also experiencing the same issue. Could you please let me know how you resolved it?
Thanks a lot!
Answer by Balint Molnar
Can you please check whether your topic was created properly, with the following command:
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --zookeeper 10.74.58.106:2181 --list
Yes, the topic has been created properly:
[root@sandbox ~]# /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --list --zookeeper localhost:2181
ATLAS_ENTITIES
ATLAS_HOOK
my-topic-01
pa1
pa1_dinesh
test
test_topic - marked for deletion
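For completeness, kafka-topics.sh --describe would also show whether the test topic has a leader assigned; a Leader of -1 (or none) would mean no live broker is currently serving it:

/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --topic test --zookeeper localhost:2181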
Hmm, my first guess is that the Kafka broker is not listening on port 6667. You can check it in Ambari, or with:
zkCli.sh -server localhost:2181 get /brokers/ids/<your broker id>
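If the broker id is not known, the registered ids can be listed first; an empty result here means no broker has registered with this ZooKeeper at all, and the get output for a registered id shows the host and port the broker advertises, which should match the address the producer uses:

zkCli.sh -server localhost:2181 ls /brokers/ids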
My second guess: try localhost:6667 instead of the IP.
If that does not help either, try recreating the test topic:
kafka-topics.sh --zookeeper localhost:2181 --create --topic mytest --partitions 1 --replication-factor 1
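Once the broker is confirmed to be listening, a minimal smoke test against the recreated topic would look roughly like this; use whatever address the broker actually advertises (for example sandbox.hortonworks.com:6667, as shown in the consumer output above):

# Produce a few test lines, then Ctrl+C
/usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list sandbox.hortonworks.com:6667 --topic mytest

# In a second terminal, read them back
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic mytest --from-beginning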