I have recently been testing an OpenStack control-plane HA setup with three controller nodes. When one of the controllers is shut down, nova service-list shows every nova service as down, and the nova-compute log fills with errors like the following:
2016-11-08 03:46:23.887 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.275 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.276 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.277 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
2016-11-08 03:46:27.278 127895 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 32] Broken pipe
The exception above can be traced to oslo_messaging/_drivers/impl_rabbit.py:
def _heartbeat_thread_job(self):
    """Thread that maintains inactive connections
    """
    while not self._heartbeat_exit_event.is_set():
        with self._connection_lock.for_heartbeat():

            recoverable_errors = (
                self.connection.recoverable_channel_errors +
                self.connection.recoverable_connection_errors)

            try:
                try:
                    self._heartbeat_check()
                    # NOTE(sileht): We need to drain event to receive
                    # heartbeat from the broker but don't hold the
                    # connection too much times. In amqpdriver a connection
                    # is used exclusivly for read or for write, so we have
                    # to do this for connection used for write drain_events
                    # already do that for other connection
                    try:
                        self.connection.drain_events(timeout=0.001)
                    except socket.timeout:
                        pass
                except recoverable_errors as exc:
                    LOG.info(_LI("A recoverable connection/channel error "
                                 "occurred, trying to reconnect: %s"), exc)
                    self.ensure_connection()
            except Exception:
                LOG.warning(_LW("Unexpected error during heartbeart "
                                "thread processing, retrying..."))
                LOG.debug('Exception', exc_info=True)

        self._heartbeat_exit_event.wait(
            timeout=self._heartbeat_wait_timeout)
    self._heartbeat_exit_event.clear()
The heartbeat check exists to verify that the connection between a component service and the RabbitMQ server is still alive, and oslo_messaging starts this heartbeat_check job in the background when the service starts. Shutting down a controller node also shuts down one of the RabbitMQ server nodes. The thread then stays in the while loop, repeatedly raising the exceptions caught by recoverable_errors, and it only leaves the loop once self._heartbeat_exit_event.is_set() becomes true. Arguably there should be some kind of retry cap or timeout here, so the thread does not keep spinning and take several minutes to recover.
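To make that suggestion concrete, here is a minimal standalone sketch of a heartbeat loop that gives up after a bounded number of consecutive reconnect failures instead of retrying indefinitely. It is only an illustration of the idea, not a patch to oslo_messaging; the HeartbeatLoop class and the check/reconnect/max_failures names are all hypothetical.

import time

# Minimal sketch (not oslo_messaging code): a heartbeat loop that stops
# retrying after a bounded number of consecutive failures, instead of
# spinning forever while a broker node is down. All names here are
# hypothetical, introduced only for this illustration.
class HeartbeatLoop(object):
    def __init__(self, check, reconnect, interval=15.0, max_failures=5):
        self._check = check            # callable that pings the broker
        self._reconnect = reconnect    # callable that re-establishes the link
        self._interval = interval      # seconds between heartbeat checks
        self._max_failures = max_failures

    def run(self):
        failures = 0
        while failures < self._max_failures:
            try:
                self._check()
                failures = 0           # connection is healthy, reset counter
            except IOError as exc:     # e.g. [Errno 32] Broken pipe
                failures += 1
                print("recoverable error (%d/%d): %s"
                      % (failures, self._max_failures, exc))
                self._reconnect()
            time.sleep(self._interval)
        # After max_failures consecutive errors, fall out of the loop rather
        # than retrying for minutes against a broker node that is gone.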
Today I set up the same three-controller HA deployment in virtual machines and added the following parameters to nova.conf:
[oslo_messaging_rabbit]
rabbit_max_retries = 2           # maximum number of reconnection attempts
heartbeat_timeout_threshold = 0  # disable the heartbeat check
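For context on these two options: per the oslo.messaging option help text, heartbeat_timeout_threshold is the number of seconds after which the broker is considered down if the heartbeat keep-alive fails, and a value of 0 disables the heartbeat entirely, which is why the loop shown above is never entered; rabbit_max_retries caps the number of RabbitMQ connection retries, where the default of 0 means retry forever.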
After this change, nova-compute no longer keeps raising the exceptions caught by recoverable_errors, and nova service-list no longer shows all services as down when a controller node is stopped.
This still needs to be verified on physical machines.