We have noticed some strange behaviour in deploying HDP stacks with Cloudbreak.
Cloudbreak: 2.7
HDP: 2.6.5.0-292
Ambari: 2.6.2.0
Our scenario:
1. Cluster was already deployed and in a running state
2. We manually added some RPMs to one of the instances and attached an IAM role to that instance
3. Shut down the instances through CB and then shut down Cloudbreak (cbd kill)
4. Brought CB back up (cbd restart)
5. Brought the instances back up through CB
At this point, all of the instances that had the RPMs added were terminated and redeployed, with no obvious logs from CB. This ended up breaking everything, since one of the affected instances was running Ambari. Is this expected behaviour from CB? If so, how do we manage customization on the instances? One example would be RPMs that need to be updated on some of the instances.
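For reference, this is roughly how we went looking for anything in the deployer logs afterwards (a minimal sketch, assuming `cbd logs` still wraps the container logs and that the deployment directory and instance ID below are placeholders for our environment):

    # on the Cloudbreak host, from the cbd deployment directory
    cd /var/lib/cloudbreak-deployment          # placeholder: wherever your Profile lives
    cbd logs cloudbreak | grep -i <instance-id>  # look for terminate/redeploy events for the node

Nothing in that output explained why the customized nodes were replaced.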
Answer by lnardai
Hi,
"We manually added some RPM's"
Can you share the specific list of the RPM's that were added?
"instances that had the RPM's added were terminated and redeployed"
Does this mean that after startup the RPM packages were reverted to their original version?
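A quick way to confirm this on one of the affected (redeployed) nodes, assuming a RHEL/CentOS-based image and with <package> standing in for whichever RPMs you installed:

    rpm -q <package>                       # currently installed version, or "not installed"
    rpm -qa --last | head -20              # most recently installed/updated packages, newest first
    grep -i <package> /var/log/yum.log     # install/erase history for the package, if yum was used

If the packages show their original versions (or are missing entirely), that would confirm the nodes were rebuilt rather than just restarted.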
Do you have any autoscaling groups enabled on the cluster?
It's possible that Salt reverted some packages while the nodes were starting, or that autoscaling determined that the nodes were not in a healthy condition.
This issue might need further investigation; could you contact your support representative and open a ticket?