When working with ElastiCache,
you cannot configure replication to another ElastiCache cluster,
which makes tasks such as migrating ElastiCache to a new environment quite awkward.
I'd like to share a tool that helps with this difficulty.
(It is best used only in environments without big keys.)

Test background

  • We plan to move the DB resources belonging to one account (e.g., account1) into another account's VPC
  • Even with VPC peering, replication between ElastiCache clusters is blocked, so migrating via replication is not possible
  • Alternatives, such as the dev team writing a batch job or restoring a snapshot of the source in the new VPC, take too long and would greatly extend the maintenance window

What is RIOT-Redis?

  • RIOT-Redis looked like a tool that can migrate data from a source to a target without the replication feature, so we tested it

테스트 환경 1 (random data)

source ElastiCache  

  • engine version : 6.0.5
  • spec : r6g.large
  • endpoint(A02) : dbatest-source-ro.cache.amazonaws.com

target ElastiCache (PRIMARY)

  • engine version : 6.0.5
  • spec : r6g.large
  • endpoint(A02) : dbatest-target.cache.amazonaws.com

RIOT-Redis 

  • tool version : 2.15.4
  • java : openjdk 11.0.11
  • In the ElastiCache parameter group, set notify-keyspace-events=KA
    • This publishes every keyspace change via Pub/Sub (K = keyspace notifications, A = all event classes)
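As a sketch, the parameter can be changed with the AWS CLI; the parameter group name below is a placeholder for whichever group is attached to your cluster.

```shell
# Hypothetical parameter group name -- substitute the group attached to the
# source cluster. KA enables keyspace-channel notifications for all events.
aws elasticache modify-cache-parameter-group \
  --cache-parameter-group-name dbatest-params \
  --parameter-name-values "ParameterName=notify-keyspace-events,ParameterValue=KA"
```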

테스트 환경 2 (real production data)

source ElastiCache  

* engine version : 6.0.5 
* spec : m6g.large
* used_memory_human:476.04M
* items : 180,000
* endpoint(A01) : dbatest-test_cache-source.cache.amazonaws.com  

target ElastiCache 

* engine version : 6.0.5 
* spec : m6g.large
* endpoint(A01) : dbatest-test_cache.cache.amazonaws.com  

테스트 환경 3 (real production data)

source ElastiCache  

* engine version : 6.0.5 
* spec : r6g.4xlarge
* used_memory_human:53.91G
* items : 45,000,000
* endpoint(A01) : dbatest-test_cache2-source.cache.amazonaws.com  

target ElastiCache 

* engine version : 6.0.5 
* spec : r6g.4xlarge
* endpoint(A01) : dbatest-test_cache2.cache.amazonaws.com  

RIOT-Redis 

* tool version : 2.15.4 
* java : openjdk 11.0.11
* In the ElastiCache parameter group, set notify-keyspace-events=KA
    * This publishes every keyspace change via Pub/Sub

Test results

  • Random data, 3 GB, 4,153,277 keys
    • Elapsed time : 17 minutes
    • Resources : 20-30% CPU on the source, plus dump-command-related latency and slow queries; no network issues
    • With many extreme big keys, several issues occurred, and in some cases the migration failed outright
  • Production Redis 1, real data, 500 MB, 180,000 keys
    • Elapsed time : 12 seconds
    • Resources : completed within a minute, so nothing showed up in monitoring
  • Production Redis 2, real data, 53 GB, 45,000,000 keys
    • Elapsed time : 41 minutes
    • Resources : 20-30% CPU on the source, plus dump-command-related latency and slow queries; no network issues
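The 17-minute random-data run corresponds to roughly 4,070 keys/s of sustained scan throughput. A small arithmetic sketch for estimating a migration window from a key count and an assumed throughput (real throughput depends heavily on value sizes):

```python
# Rough ETA for a scan-based migration: total keys divided by an assumed
# sustained throughput in keys/s. The 4,070 keys/s figure is taken from the
# random-data test run; treat it as an assumption, not a guarantee.
def eta_minutes(total_keys: int, keys_per_sec: float) -> float:
    return total_keys / keys_per_sec / 60

print(round(eta_minutes(4_153_277, 4070)))  # 17
```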

Usage

Installation

  • When running on a personal Mac
$ brew tap AdoptOpenJDK/openjdk
$ brew install adoptopenjdk11 --cask

$ java --version
openjdk 11.0.11 2021-04-20
OpenJDK Runtime Environment AdoptOpenJDK-11.0.11+9 (build 11.0.11+9)
OpenJDK 64-Bit Server VM AdoptOpenJDK-11.0.11+9 (build 11.0.11+9, mixed mode)

$ riot-redis --version

      ▀        █     ██████████████████████████
 █ ██ █  ███  ████   ██████████████████████████
 ██   █ █   █  █     ██████████████████████████
 █    █ █   █  █     ██████████████████████████
 █    █  ███    ██   ██████████████████████████  v2.15.4

  • When running on EC2
$ sudo curl -L https://corretto.aws/downloads/latest/amazon-corretto-11-x64-linux-jdk.rpm -o jdk11.rpm
$ sudo yum localinstall jdk11.rpm
$ sudo /usr/sbin/alternatives --config java
$ java --version
$ git clone https://github.com/redis-developer/riot.git
$ cd riot/bin
$ ./riot-redis

Verifying the riot-redis installation

$ riot-redis --help
Usage: riot-redis [OPTIONS] [COMMAND]
  -H, --help                Show this help message and exit
  -V, --version             Print version information and exit.
  -q, --quiet               Log errors only.
  -w, --warn                Set log level to warn.
  -i, --info                Set log level to info.
  -d, --debug               Log in debug mode (includes normal stacktrace).
      --stacktrace          Print out the stacktrace for all exceptions.
Redis connection options
  -h, --hostname=<host>     Server hostname (default: localhost).
  -p, --port=<port>         Server port (default: 6379).
  -s, --socket=<socket>     Server socket (overrides hostname and port).
      --user=<name>         Used to send ACL style 'AUTH username pass'. Needs password.
  -a, --pass[=<password>]   Password to use when connecting to the server.
  -u, --uri=<uri>...        Server URI.
      --timeout=<sec>       Redis command timeout (default: 60).
  -n, --db=<db>             Database number (default: 0).
  -c, --cluster             Enable cluster mode.
      --tls                 Establish a secure TLS connection.
      --insecure            Allow insecure TLS connection by skipping cert validation.
      --ks=<file>           Path to keystore.
      --ks-password[=<pwd>] Keystore password.
      --ts=<file>           Path to truststore.
      --ts-password[=<pwd>] Truststore password.
      --cert=<file>         X.509 certificate collection in PEM format.
      --latency             Show latency metrics.
      --[no-]auto-reconnect Auto reconnect on connection loss. True by default.
      --client=<name>       Client name used to connect to Redis.
Commands:
  replicate  Replicate a source Redis DB to a target Redis DB
  compare    Compare a target Redis database with a source Redis database and prints the differences
  info       Display INFO command output
  latency    Calculate latency stats
  ping       Execute PING command

Preparing test data

### Load data
$ redis-benchmark -h dbatest-source.cache.amazonaws.com -n 5000000 -r 6000000 -d 100 -c 1000 -q -t set,incr,sadd,hset,zadd

### Check the source data
dbatest-source.cache.amazonaws.com:6379> info memory
# Memory
used_memory:2985505624
used_memory_human:2.78G

dbatest-source.cache.amazonaws.com:6379> info keyspace
# Keyspace
db0:keys=4153275,expires=0,avg_ttl=0

### bigkey
$ redis-cli --bigkeys -h dbatest-source.cache.amazonaws.com

# Scanning the entire keyspace to find biggest keys as well as
# average sizes per key type.  You can use -i 0.1 to sleep 0.1 sec
# per 100 SCAN commands (not usually needed).

[00.00%] Biggest string found so far '"counter:000012580161"' with 1 bytes
[00.00%] Biggest string found so far '"key:000005073476"' with 1000 bytes
[04.03%] Biggest set    found so far '"myset"' with 2075765 members
[11.34%] Biggest zset   found so far '"myzset"' with 2845947 members
[24.08%] Sampled 1000000 keys so far
[45.23%] Biggest hash   found so far '"myhash"' with 2076126 fields
[48.15%] Sampled 2000000 keys so far
[72.23%] Sampled 3000000 keys so far
[96.31%] Sampled 4000000 keys so far

-------- summary -------

Sampled 4153277 keys in the keyspace!
Total key length in bytes is 74758181 (avg len 18.00)

Biggest   hash found '"myhash"' has 2076126 fields
Biggest string found '"key:000005073476"' has 1000 bytes
Biggest    set found '"myset"' has 2075765 members
Biggest   zset found '"myzset"' has 2845947 members

0 lists with 0 items (00.00% of keys, avg size 0.00)
1 hashs with 2076126 fields (00.00% of keys, avg size 2076126.00)
4153274 strings with 1026317073 bytes (100.00% of keys, avg size 247.11)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
1 sets with 2075765 members (00.00% of keys, avg size 2075765.00)
1 zsets with 2845947 members (00.00% of keys, avg size 2845947.00)

migration 

### Run RIOT-Redis
$ riot-redis --info --timeout=6000 -h dbatest-source-ro.cache.amazonaws.com -p 6379 replicate --type dump -h dbatest-target.cache.amazonaws.com -p 6379 --mode live

Listening  ? % │   █                                                                                           │ 3/? (0:00:54 / ?) .1/s
Scanning  55% │████████████████████████████████████████▉                                 │ 2294150/4153277 (0:08:11 / 0:06:38) 4672.4/s
.
.
Scanning  99% │█████████████████████████████████████████████████████████████████████████▉│ 4152150/4153278 (0:17:00 / 0:00:00) 4070.7/s
Step: [redis-scan-reader] executed in 17m0s869ms
Closing connection pool
Scanning  99% │█████████████████████████████████████████████████████████████████████████▉│ 4153250/4153278 (0:17:00 / 0:00:00) 4071.8/s875ms
Scanning 100% │██████████████████████████████████████████████████████████████████████████│ 4153279/4153279 (0:17:01 / 0:00:00) 4067.9/s
Step: [scan-replication-step] executed in 17m6s752ms
Closing redis-scan-reader
Closing connection pool

=> Data that changes during the migration is counted under Listening.
In the run above, three keys changed while the migration was in progress.


### source monitor
1645771250.381977 [0 lua] "pttl" "key:000000416344"
1645771250.381982 [0 172.27.2.141:55994] "dump" "key:000000416344"
1645771250.381986 [0 172.27.2.141:55994] "evalsha" "e94431da78ccbc2870854839a78a5a2cb2e8e1cc" "1" "counter:000000529715"
1645771250.381994 [0 lua] "pttl" "counter:000000529715"
1645771250.381998 [0 172.27.2.141:55994] "dump" "counter:000000529715"
1645771250.382003 [0 172.27.2.141:55994] "evalsha" "e94431da78ccbc2870854839a78a5a2cb2e8e1cc" "1" "counter:000001817661"
1645771250.382011 [0 lua] "pttl" "counter:000001817661"
1645771250.382015 [0 172.27.2.141:55994] "dump" "counter:000001817661"
1645771250.382020 [0 172.27.2.141:55994] "evalsha" "e94431da78ccbc2870854839a78a5a2cb2e8e1cc" "1" "counter:000001611759"
1645771250.382028 [0 lua] "pttl" "counter:000001611759"
1645771250.382033 [0 172.27.2.141:55994] "dump" "counter:000001611759"
1645771250.382037 [0 172.27.2.141:55994] "evalsha" "e94431da78ccbc2870854839a78a5a2cb2e8e1cc" "1" "counter:000007329533"
1645771250.382045 [0 lua] "pttl" "counter:000007329533"
1645771250.382050 [0 172.27.2.141:55994] "dump" "counter:000007329533"
1645771250.382054 [0 172.27.2.141:55994] "evalsha" "e94431da78ccbc2870854839a78a5a2cb2e8e1cc" "1" "key:000000822262"
1645771250.382062 [0 lua] "pttl" "key:000000822262"
1645771250.382066 [0 172.27.2.141:55994] "dump" "key:000000822262"
1645771250.382070 [0 172.27.2.141:55994] "evalsha" "e94431da78ccbc2870854839a78a5a2cb2e8e1cc" "1" "key:000004851019"
1645771250.382078 [0 lua] "pttl" "key:000004851019"
1645771250.382082 [0 172.27.2.141:55994] "dump" "key:000004851019"

### target monitor
1645771250.393121 [0 172.27.2.141:55995] "restore" "key:000000416344" "0" "\x00\nVXKeHogKgJ\t\x00\x81{T\xe9\xb0\xdb\xb7\xaf" "REPLACE"
1645771250.393131 [0 172.27.2.141:55995] "restore" "counter:000000529715" "0" "\x00\xc0\x02\t\x00_P\xe1p\xacR\x1dz" "REPLACE"
1645771250.393139 [0 172.27.2.141:55995] "restore" "counter:000001817661" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
1645771250.393148 [0 172.27.2.141:55995] "restore" "counter:000001611759" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
1645771250.393157 [0 172.27.2.141:55995] "restore" "counter:000007329533" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
1645771250.393165 [0 172.27.2.141:55995] "restore" "key:000000822262" "0" "\x00\nVXKeHogKgJ\t\x00\x81{T\xe9\xb0\xdb\xb7\xaf" "REPLACE"
1645771250.393175 [0 172.27.2.141:55995] "restore" "key:000004851019" "0" "\x00@dVXKeHogKgJ=[5V9_X^b?48OKF2jGA<f:iR@50o7dS3JV4Q6L68lC[GTA]0DaMg?_oSmcS2^N1J?ELSX@CfKQ7cM5aea\\ngY8a3LG\t\x00\xfc\xa4\xfe\xa5f\xe7\xe1!" "REPLACE"
1645771250.402728 [0 172.27.2.141:55995] "restore" "counter:000003030227" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
1645771250.402739 [0 172.27.2.141:55995] "restore" "key:000001606623" "0" "\x00\nVXKeHogKgJ\t\x00\x81{T\xe9\xb0\xdb\xb7\xaf" "REPLACE"
1645771250.402749 [0 172.27.2.141:55995] "restore" "counter:000051953072" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
1645771250.402757 [0 172.27.2.141:55995] "restore" "counter:000003123847" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
1645771250.402766 [0 172.27.2.141:55995] "restore" "counter:000002542782" "0" "\x00\xc0\x01\t\x00\xf6\x8a\xb6z\x85\x87rM" "REPLACE"
=> the dump & restore pattern
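The monitor output shows RIOT-Redis reading each key's TTL and serialized value on the source (PTTL + DUMP, wrapped in a Lua script) and replaying it on the target with RESTORE ... REPLACE. A toy simulation of that per-key pattern over plain dicts, with pickle standing in for Redis's serialization (no real Redis client involved):

```python
import pickle

# Simulate the per-key DUMP -> RESTORE ... REPLACE flow over plain dicts.
# pickle stands in for Redis's internal serialization format.
def migrate_key(source: dict, target: dict, key: str) -> None:
    payload = pickle.dumps(source[key])   # DUMP on the source
    target[key] = pickle.loads(payload)   # RESTORE ... REPLACE on the target

src = {"counter:000000529715": 2, "key:000000416344": "VXKeHogKgJ"}
dst = {}
for k in src:
    migrate_key(src, dst, k)
print(dst == src)  # True
```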

### Verify data sync
$ riot-redis -h dbatest-source-ro.cache.amazonaws.com -p 6379 compare -h dbatest-target.cache.amazonaws.com -p 6379
Verifying  10% │████▍                                     │  432850/4153280 (0:02:39 / 0:22:49) 2722.3/s >0 T0 ≠0 ⧗00 (0:00:00 / ?) ?/s

>: # keys present only in the source database
T: # mismatched data structure types
≠: # mismatched values
⧗: # keys whose TTL delta exceeds the tolerance
<: # keys present only in the target database
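A toy illustration of the source-only, value-mismatch, and target-only categories above, computed over two dict snapshots (the type and TTL checks are omitted for brevity):

```python
# Classify differences between two key-value snapshots, mirroring the
# '>' (source only), '<' (target only), and '≠' (mismatched value) counters.
def compare(source: dict, target: dict):
    only_source = sorted(source.keys() - target.keys())           # >
    only_target = sorted(target.keys() - source.keys())           # <
    mismatched = sorted(k for k in source.keys() & target.keys()
                        if source[k] != target[k])                # ≠
    return only_source, only_target, mismatched

src = {"a": 1, "b": 2, "c": 3}
dst = {"b": 2, "c": 9, "d": 4}
print(compare(src, dst))  # (['a'], ['d'], ['c'])
```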

Caveats

  • While running riot-redis, the connection drops while processing big keys, regardless of key type. Even a generous --timeout does not help; root cause not identified
Executing step: [redis-scan-reader]
Scanning   0% │                                                                                     │       0/4153281 (0:00:00 / ?) ?/s
Scanning  75% │████████████████████████████████████████████████████████                  │ 3143600/4153281 (0:10:20 / 0:03:19) 5070.3/s
Unexpected exception during request: java.io.IOException: Protocol wrong type for socket
Unexpected exception during request: java.io.IOException: Protocol wrong type for socket
.
.
.
Unexpected exception during request: java.io.IOException: Protocol wrong type for socket
Unexpected exception during request: java.io.IOException: Protocol wrong type for socket
Reconnecting, last destination was dbatest-target.cache.amazonaws.com/10.81.20.77:6379
  • When a hash is very large, an error occurs and the value cannot be read; root cause not identified
Executing step: [redis-scan-reader]
Scanning  ? % ││ 0/0 (0:00:00 / ?) ?/s
Job redis-scan-reader status: STARTED
Encountered an error executing step redis-scan-reader in job redis-scan-reader: minimumReadableBytes : -824121627 (expected: >= 0)
Step: [redis-scan-reader] executed in 26s428ms
Closing connection pool
Job: [SimpleJob: [name=redis-scan-reader]] completed with the following parameters: [{}] and the following status: [FAILED] in 26s435ms
Scanning  ? % ││ 0/0 (0:00:26 / ?) .0/s
Step: [scan-replication-step] executed in 32s293ms
Closing redis-scan-reader
Closing connection pool
  • When the source is set as the reader, a key-count mismatch occurs; root cause not identified
Executing step: [redis-scan-reader]
Listening  ? % │                             █                                                                                                               │ 29/? (0:00:12 / ?) 2.4/s
Scanning  99% │██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ │ 174300/175594 (0:00:12 / 0:00:00) 14525.0/s
Step: [redis-scan-reader] executed in 12s295ms
Closing connection pool
Job: [SimpleJob: [name=redis-scan-reader]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 12s305ms
Scanning  99% │██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏│ 174514/175594 (0:00:12 / 0:00:00) 14542.8/s
Step: [scan-replication-step] executed in 13s497ms
Listening  ? % │                                      █                                                                                                     │ 598/? (0:04:13 / ?) 2.4/s
Closing connection pool

### source key count
# Keyspace
db0:keys=175476,expires=174965,avg_ttl=26935428

### target key count
# Keyspace
db0:keys=174511,expires=174000,avg_ttl=26592457

=> roughly a 1,000-key difference

  • The listening thread gets disconnected by the timeout parameter => can be resolved by setting the timeout parameter to 0
Job: [SimpleJob: [name=redis-scan-reader]] launched with the following parameters: [{}]
Executing step: [redis-scan-reader]
Scanning 4% │████▊ │ 1810050/44875392 (0:01:41 / 0:40:05) 17921.3/s
Reconnecting, last destination was dbatest-test_cache2-source.cache.amazonaws.com  /100.1.28.193:6379
Scanning 8% │█████████▉ │ 3741400/44875392 (0:03:19 / 0:36:36) 18801.0/s
Listening ? % │ █ │ 2/? (0:04:13 / ?) .0/s
Scanning 14% │█████████████████▏ │ 6498400/44875392 (0:05:53 / 0:34:45) 18409.1/s
Reconnecting, last destination was dbatest-test_cache2.cache.amazonaws.com  /100.1.98.106:6379
Reconnected to dbatest-test_cache2.cache.amazonaws.com  :6379
Reconnecting, last destination was dbatest-test_cache2-source.cache.amazonaws.com  /100.1.28.193:6379
Scanning 18% │█████████████████████▊ │ 8237550/44875392 (0:07:33 / 0:33:38) 18184.4/s
Reconnecting, last destination was dbatest-test_cache2.cache.amazonaws.com  /100.1.98.106:6379
Reconnected to dbatest-test_cache2.cache.amazonaws.com  :6379
Reconnecting, last destination was dbatest-test_cache2-source.cache.amazonaws.com  /100.1.28.193:6379
Scanning 22% │██████████████████████████▎ │ 9900650/44875392 (0:09:15 / 0:32:40) 17839.0/s
Reconnecting, last destination was dbatest-test_cache2-source.cache.amazonaws.com  /100.1.28.193:6379
Reconnected to dbatest-test_cache2-source.cache.amazonaws.com  :6379
Reconnecting, last destination was dbatest-test_cache2.cache.amazonaws.com  /100.1.98.106:6379
Scanning 25% │██████████████████████████████▌ │ 11512750/44875392 (0:10:45 / 0:31:09) 17849.2/s
=> With the current standard parameters, timeout is set to 100,
so if no keyspace events arrive from the source for 100 seconds, the connection is dropped and then reconnected.
Commands may be missed during that window.
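A sketch of the live replicate invocation with the client-side timeout disabled, using the test endpoints above (verify the flag behavior against your RIOT version):

```shell
# --timeout=0 disables the client-side command timeout, so the idle
# live-listening connection is not dropped and reconnected mid-migration.
riot-redis --info --timeout=0 \
  -h dbatest-test_cache2-source.cache.amazonaws.com -p 6379 \
  replicate --type dump \
  -h dbatest-test_cache2.cache.amazonaws.com -p 6379 --mode live
```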

Test runs on real production data

migration test - test_cache

### bigkeys
-------- summary -------

Sampled 171883 keys in the keyspace!
Total key length in bytes is 5330306 (avg len 31.01)

Biggest   list found '"baemin-pay-code:BANK"' has 23 items
Biggest string found '"constraint.test_cache-group:5129660"' has 4336 bytes
Biggest    set found '"SUPER_test_cache"' has 1 members

2 lists with 33 items (00.00% of keys, avg size 16.50)
0 hashs with 0 fields (00.00% of keys, avg size 0.00)
171880 strings with 424200936 bytes (100.00% of keys, avg size 2468.01)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
1 sets with 1 members (00.00% of keys, avg size 1.00)
0 zsets with 0 members (00.00% of keys, avg size 0.00)


### migration
$ ./riot-redis --info -h dbatest-test_cache-source.cache.amazonaws.com -p 6379 replicate --type dump -h dbatest-test_cache.cache.amazonaws.com -p 6379 --mode live

Executing step: [redis-scan-reader]
Listening  ? % │                                                                                 █                                                           │ 81/? (0:00:12 / ?) 6.8/s
Scanning  99% │██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋│ 181000/181410 (0:00:11 / 0:00:00) 16454.5/s
Step: [redis-scan-reader] executed in 11s382ms
Closing connection pool
Job: [SimpleJob: [name=redis-scan-reader]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 11s404ms
Scanning  99% │██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉│ 181325/181410 (0:00:11 / 0:00:00) 16484.1/s
Step: [scan-replication-step] executed in 12s453ms
Listening  ? % │                                                                                                █                                          │ 3154/? (0:35:23 / ?) 1.5/s
Closing connection pool


### Verify data sync
$ ./riot-redis --info -h dbatest-test_cache-source.cache.amazonaws.com -p 6379 compare -h dbatest-test_cache.cache.amazonaws.com -p 6379

Executing step: [RedisItemReader]
Verifying   0% │                                                                                                                                      │      0/177320 (0:00:00 / ?) ?/s
Verifying  96% │████████████████████████████████████████████████████████████████████████████████████████▏  │ 171850/177320 (0:00:14 / 0:00:00) 12275.0/s >0 T0 ≠0 ⧗0
Step: [RedisItemReader] executed in 14s973ms
Verifying  99% │██████████████████████████████████████████████████████████████████████████████████████████▍│ 176300/177320 (0:00:15 / 0:00:00) 11753.3/s >0 T0 ≠0 ⧗0
Job: [SimpleJob: [name=RedisItemReader]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 14s987ms
Verification completed - all OK
Verifying  99% │██████████████████████████████████████████████████████████████████████████████████████████▍│ 176344/177320 (0:00:15 / 0:00:00) 11756.3/s >0 T0 ≠0 ⧗0
Step: [verification] executed in 16s138ms
Closing RedisItemReader
Closing connection pool
Job: [SimpleJob: [name=compare]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 16s159ms

migration test - test_cache2

### bigkeys
-------- summary -------

Sampled 44875392 keys in the keyspace!
Total key length in bytes is 2053397900 (avg len 45.76)

Biggest string found '"WWII_EVENT_test_cache_INIT:144232090"' has 8 bytes
Biggest    set found '"_CPN_CODE_STORE:8"' has 5000000 members

0 lists with 0 items (00.00% of keys, avg size 0.00)
0 hashs with 0 fields (00.00% of keys, avg size 0.00)
183 strings with 812 bytes (00.00% of keys, avg size 4.44)
0 streams with 0 entries (00.00% of keys, avg size 0.00)
44875209 sets with 373951344 members (100.00% of keys, avg size 8.33)
0 zsets with 0 members (00.00% of keys, avg size 0.00)


### migration
$ ./riot-redis --info -h dbatest-test_cache2-source.cache.amazonaws.com -p 6379 replicate --type dump -h dbatest-test_cache2.cache.amazonaws.com -p 6379 --mode live
Executing job redis-scan-reader
Scanning   0% │                                                                                                                                   │        0/44875392 (0:00:00 / ?) ?/s
Job: [SimpleJob: [name=redis-scan-reader]] launched with the following parameters: [{}]
Executing step: [redis-scan-reader]
Scanning   4% │████▊                                                                                                                  │  1810050/44875392 (0:01:41 / 0:40:05) 17921.3/s
.
.
.
Scanning  99% │██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▉│ 44875250/44875392 (0:40:52 / 0:00:00) 18301.5/s
Step: [redis-scan-reader] executed in 40m52s764ms
Closing connection pool
Job: [SimpleJob: [name=redis-scan-reader]] completed with the following parameters: [{}] and the following status: [COMPLETED] in 40m52s793ms
Scanning 100% │███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████│ 44875394/44875394 (0:40:52 / 0:00:00) 18301.5/s
Step: [scan-replication-step] executed in 40m53s980ms
Closing redis-scan-reader
Closing connection pool


### Verify data sync
$ ./riot-redis --info -h dbatest-test_cache2-source.cache.amazonaws.com -p 6379 compare -h dbatest-test_cache2.cache.amazonaws.com -p 6379
Verifying   4% │███▋                                                                                   │  1911200/44875395 (0:02:42 / 1:00:55) 11797.5/s >0 T0 ≠0 ⧗05 (0:00:00 / ?) ?/s
Verifying  99% │██████████████████████████████████████████████████████████████████████████████████████▉│ 44875350/44875395 (1:03:30 / 0:00:00) 11778.3/s >0 T0 ≠0 ⧗0
Verifying 100% │███████████████████████████████████████████████████████████████████████████████████████│ 44875395/44875395 (1:03:30 / 0:00:00) 11778.3/s >0 T0 ≠0 ⧗0