
Help needed setting up a Redis cluster

    coolloves · 2021-07-01 10:03:07 +08:00 · 1069 views
    Version
    Redis server v=4.0.14 sha=00000000:0 malloc=jemalloc-4.0.3 bits=64 build=25108fc8c0aa24db
    
    Setting up a test Redis cluster on three machines:
    192.168.11.225
    192.168.11.226
    192.168.11.227
    
    
    SELinux has been disabled, the firewall has been turned off, and the +10000 (cluster bus) ports have been opened, yet it now gets stuck at the following point:
    >>> Creating cluster
    >>> Performing hash slots allocation on 6 nodes...
    Using 3 masters:
    192.168.11.225:6379
    192.168.11.226:6379
    192.168.11.227:6379
    Adding replica 192.168.11.226:6380 to 192.168.11.225:6379
    Adding replica 192.168.11.227:6380 to 192.168.11.226:6379
    Adding replica 192.168.11.225:6380 to 192.168.11.227:6379
    M: c695368eeb6ca56856eb40497888486edd1c011a 192.168.11.225:6379
       slots:0-5460 (5461 slots) master
    M: 3105b30565cc2bcff62d8816f2439417d388249e 192.168.11.226:6379
       slots:5461-10922 (5462 slots) master
    M: b7dadc6fae9b582e5180f659a6f29e662de21cc8 192.168.11.227:6379
       slots:10923-16383 (5461 slots) master
    S: 0627db6989cdfe0f839e6a0ed4fb9edee8f85e92 192.168.11.225:6380
       replicates b7dadc6fae9b582e5180f659a6f29e662de21cc8
    S: 10d90234c741b4b60666501a4c64e72353cc2902 192.168.11.226:6380
       replicates c695368eeb6ca56856eb40497888486edd1c011a
    S: 02cfac1a71c202ee0c0dabc3b7fd58076663074c 192.168.11.227:6380
       replicates 3105b30565cc2bcff62d8816f2439417d388249e
    Can I set the above configuration? (type 'yes' to accept): yes
    >>> Nodes configuration updated
    >>> Assign a different config epoch to each node
    >>> Sending CLUSTER MEET messages to join the cluster
    Waiting for the cluster to join.............................................................................................^C./redis-trib.rb:653:in `sleep': Interrupt
            from ./redis-trib.rb:653:in `wait_cluster_join'
            from ./redis-trib.rb:1436:in `create_cluster_cmd'
            from ./redis-trib.rb:1830:in `<main>'
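
    The creation command itself is not quoted above; for this topology (a 6379 master and a 6380 replica instance on each of the three hosts), the redis-trib.rb invocation would presumably have looked roughly like the sketch below. The path to redis-trib.rb is an assumption.

    ```
    # Assumed invocation (not shown in the thread): redis-trib.rb from the
    # Redis 4.x source tree, one replica per master, six instances in total.
    ./redis-trib.rb create --replicas 1 \
      192.168.11.225:6379 192.168.11.226:6379 192.168.11.227:6379 \
      192.168.11.225:6380 192.168.11.226:6380 192.168.11.227:6380
    ```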
    
    

    Below is a sample config file; the other machines are configured similarly:

    port 6379
    daemonize yes
    protected-mode no
    
    bind 192.168.11.225
    pidfile /var/run/redis_cm.pid
    
    logfile /usr/local/redis/log/redis_cm.log
    
    dir /usr/local/redis/data_m/
    
    appendonly yes
    
    
    cluster-enabled yes 
    cluster-node-timeout 15000
    cluster-config-file /usr/local/redis/etc/nodes_6379.conf 
    loglevel debug
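
    Not shown above: each machine also runs a second instance on port 6380, which needs its own port, pidfile, logfile, data dir and, importantly, its own cluster-config-file. Assuming config files laid out next to the 6379 one (the paths below are hypothetical), each host starts its two instances like this:

    ```
    # Hypothetical config paths; the thread only shows the 6379 sample.
    # Each instance must point at a distinct cluster-config-file, otherwise
    # the two instances would clobber each other's cluster state.
    redis-server /usr/local/redis/etc/redis_6379.conf
    redis-server /usr/local/redis/etc/redis_6380.conf
    ```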
    
    Addendum 1  ·  2021-07-02 10:03:42 +08:00
    It turned out one of the machines was running a different Redis version. Case closed.
    thet · #1 · 2021-07-01 10:26:58 +08:00
    Check the cluster state with cluster info and cluster nodes; it looks like it got stuck at the CLUSTER MEET step.
    coolloves (OP) · #2 · 2021-07-01 11:18:15 +08:00
    ```
    192.168.11.225:6379> CLUSTER nodes
    c695368eeb6ca56856eb40497888486edd1c011a 192.168.11.225:6379 myself,master - 0 0 1 connected 0-5460
    0627db6989cdfe0f839e6a0ed4fb9edee8f85e92 192.168.11.225:6380 master - 0 1625109373893 4 connected
    192.168.11.225:6379> CLUSTER info
    cluster_state:fail
    cluster_slots_assigned:5461
    cluster_slots_ok:5461
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:2
    cluster_size:1
    cluster_current_epoch:4
    cluster_my_epoch:1
    cluster_stats_messages_sent:9009
    cluster_stats_messages_received:9013

    192.168.11.226:6379> CLUSTER nodes
    3105b30565cc2bcff62d8816f2439417d388249e :6379@16379 myself,master - 0 0 2 connected 5461-10922
    192.168.11.226:6379> CLUSTER info
    cluster_state:fail
    cluster_slots_assigned:5462
    cluster_slots_ok:5462
    cluster_slots_pfail:0
    cluster_slots_fail:0
    cluster_known_nodes:1
    cluster_size:1
    cluster_current_epoch:2
    cluster_my_epoch:2
    cluster_stats_messages_meet_sent:1
    cluster_stats_messages_sent:1
    cluster_stats_messages_received:0


    The log on 11.225:6379 looks like this:
    199519:M 01 Jul 11:17:35.182 . --- Processing packet of type 1, 2208 bytes
    199519:M 01 Jul 11:17:35.182 . pong packet received: 0x7fd2be5a0c00
    199519:M 01 Jul 11:17:36.185 . Pinging node 0627db6989cdfe0f839e6a0ed4fb9edee8f85e92
    199519:M 01 Jul 11:17:36.185 . --- Processing packet of type 1, 2208 bytes
    199519:M 01 Jul 11:17:36.186 . pong packet received: 0x7fd2be5a0c00
    199519:M 01 Jul 11:17:36.981 . --- Processing packet of type 0, 2208 bytes
    199519:M 01 Jul 11:17:36.981 . Ping packet received: (nil)
    199519:M 01 Jul 11:17:36.981 . ping packet received: (nil)
    199519:M 01 Jul 11:17:37.191 . Pinging node 0627db6989cdfe0f839e6a0ed4fb9edee8f85e92
    199519:M 01 Jul 11:17:37.191 . --- Processing packet of type 1, 2208 bytes
    199519:M 01 Jul 11:17:37.191 . pong packet received: 0x7fd2be5a0c00
    199519:M 01 Jul 11:17:37.985 . --- Processing packet of type 0, 2208 bytes
    199519:M 01 Jul 11:17:37.985 . Ping packet received: (nil)
    199519:M 01 Jul 11:17:37.986 . ping packet received: (nil)
    199519:M 01 Jul 11:17:38.194 . Pinging node 0627db6989cdfe0f839e6a0ed4fb9edee8f85e92
    199519:M 01 Jul 11:17:38.194 . --- Processing packet of type 1, 2208 bytes
    199519:M 01 Jul 11:17:38.194 . pong packet received: 0x7fd2be5a0c00
    199519:M 01 Jul 11:17:38.986 . --- Processing packet of type 0, 2208 bytes
    199519:M 01 Jul 11:17:38.986 . Ping packet received: (nil)
    199519:M 01 Jul 11:17:38.987 . ping packet received: (nil)
    199519:M 01 Jul 11:17:39.195 . Pinging node 0627db6989cdfe0f839e6a0ed4fb9edee8f85e92
    199519:M 01 Jul 11:17:39.196 . --- Processing packet of type 1, 2208 bytes
    199519:M 01 Jul 11:17:39.196 . pong packet received: 0x7fd2be5a0c00
    199519:M 01 Jul 11:17:39.297 - 0 clients connected (0 slaves), 1173864 bytes in use

    ```
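
    From the output above, the 226 node has sent a single MEET message and received nothing back, while 225 keeps logging ping packets over a link it cannot associate with a known node, which suggests the cluster bus traffic (client port + 10000, i.e. 16379) is not getting through in at least one direction. A quick reachability check between the hosts might look like this sketch (not from the thread):

    ```
    # Run from 192.168.11.225; repeat from each host towards the other two.
    # 6379 is the client port, 16379 the cluster bus port (client port + 10000).
    for h in 192.168.11.226 192.168.11.227; do
      redis-cli -h "$h" -p 6379 ping   # should print PONG
      nc -zv "$h" 6379                 # client port reachable?
      nc -zv "$h" 16379                # cluster bus port reachable?
    done
    ```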
    coolloves (OP) · #3 · 2021-07-02 10:03:52 +08:00
    It turned out one of the machines was running a different Redis version. Case closed.
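
    A version mismatch like this is easy to miss; one quick way to compare all six instances (a small sketch, not something posted in the thread):

    ```
    # Compare redis_version across every instance; they should all match.
    for h in 192.168.11.225 192.168.11.226 192.168.11.227; do
      for p in 6379 6380; do
        echo -n "$h:$p -> "
        redis-cli -h "$h" -p "$p" INFO server | grep '^redis_version'
      done
    done
    ```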