Developer: Zhu Wenjie
- Note: please add a `.` before the `idea` directory (rename it to `.idea`); some dependencies ended up in it by my mistake.
- Note: keep at least two data nodes per slave group, otherwise load balancing will not select a node and no primary can be decided.
- Note: a ZooKeeper cluster setup is not committed, because getContent can return old data when setContent runs concurrently; a lock might help.
A distributed in-memory key-value storage system that provides basic data sharding on a master/slave architecture.
Requests are dispatched according to the key, as decided by a consistent hash map.
When a slave is added, the system re-shards the data without halting; the router remembers the sharding history.
TOW (transfer on write) means that data only needs to be transferred when it is written.
- Read: the router decides which node serves the request.
- Write: the key's hash decides which group owns it (a consistent-hashing sketch follows this list).
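
A minimal sketch of the key-to-group dispatch under consistent hashing; `ConsistentHashRouter`, the virtual-node count, and the FNV-1a hash are illustrative choices, not the project's actual implementation.

```java
import java.util.SortedMap;
import java.util.TreeMap;

// Hypothetical sketch: place each data group on a hash ring with several
// virtual nodes, then route a key to the first group clockwise from its hash.
public class ConsistentHashRouter {
    private static final int VIRTUAL_NODES = 16;
    private final SortedMap<Integer, String> ring = new TreeMap<>();

    public void addGroup(String groupId) {
        for (int i = 0; i < VIRTUAL_NODES; i++) {
            ring.put(hash(groupId + "#" + i), groupId);
        }
    }

    // Write path: the key's hash decides which group owns it.
    public String groupFor(String key) {
        int h = hash(key);
        SortedMap<Integer, String> tail = ring.tailMap(h);
        return tail.isEmpty() ? ring.get(ring.firstKey()) : tail.get(tail.firstKey());
    }

    private int hash(String s) {
        int h = 0x811c9dc5;                   // FNV-1a, kept simple for illustration
        for (char c : s.toCharArray()) {
            h ^= c;
            h *= 0x01000193;
        }
        return h & 0x7fffffff;
    }
}
```

With virtual nodes, adding a new group only moves the keys whose hashes fall into the new group's arcs, which is what allows re-sharding without halting.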
All data is kept consistent through 2PC: the coordinator reads from the primary node and writes to the primary and all backup nodes.
The 2PC group is identified by the `group` property in Dubbo.
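
A hedged sketch of how Dubbo's `group` attribute can tie one replica set together; the `KvStore` contract, the class names, and the use of the broadcast cluster for the write path are assumptions for illustration (the real project may use XML config or the older `com.alibaba.dubbo` annotations).

```java
import org.apache.dubbo.config.annotation.DubboReference;
import org.apache.dubbo.config.annotation.DubboService;

// Illustrative contract for one key-value group (not the project's interface).
interface KvStore {
    String get(String key);
    boolean prepare(String key, String value);   // 2PC phase 1
    boolean commit(String key, String value);    // 2PC phase 2
}

// Provider side: each data node registers under its group;
// in the real project the group id comes from the property file.
@DubboService(group = "2")
class KvStoreImpl implements KvStore {
    public String get(String key) { return null; }                    // in-memory store omitted
    public boolean prepare(String key, String value) { return true; }
    public boolean commit(String key, String value) { return true; }
}

// Coordinator side: a plain reference reaches one provider in the group,
// while writes must hit every provider (primary + backups) in both 2PC phases;
// Dubbo's broadcast cluster is one way to achieve that.
class Coordinator {
    @DubboReference(group = "2")
    private KvStore reader;

    @DubboReference(group = "2", cluster = "broadcast")
    private KvStore writers;
}
```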
With `group.hotfix = 1`, a newly deployed backup automatically finds its group's primary node and gets SYNCed (a sketch of this flow follows).
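
A rough sketch of that hotfix flow, assuming a snapshot-style transfer; `PrimaryLocator`, `SyncSource`, and `dumpAll` are hypothetical names, not the project's API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical: a full key-value snapshot exposed by the group's primary.
interface SyncSource {
    Map<String, String> dumpAll();
}

// Hypothetical: resolves the primary of a group, e.g. from ZooKeeper.
interface PrimaryLocator {
    SyncSource primaryOf(String groupId);
}

class BackupBootstrap {
    private final Map<String, String> store = new ConcurrentHashMap<>();

    void start(String groupId, boolean hotfix, PrimaryLocator locator) {
        if (hotfix) {
            // SYNC: copy everything the primary currently holds...
            store.putAll(locator.primaryOf(groupId).dumpAll());
        }
        // ...then register as a backup provider of the group.
    }
}
```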
When the primary node disconnects, the first of the remaining providers becomes the new primary node.
This is implemented by watching a ZooKeeper node, as sketched below.
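
A sketch of that failover rule with the plain ZooKeeper client: watch the path where the group's providers register, and whenever the child list changes (for example the primary's ephemeral node disappears), promote the first remaining provider. The path and class name are illustrative, not the project's.

```java
import org.apache.zookeeper.WatchedEvent;
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;

import java.util.Collections;
import java.util.List;

public class PrimaryWatcher implements Watcher {
    private static final String PROVIDERS_PATH = "/dubbo/KvStore/providers"; // illustrative path
    private final ZooKeeper zk;
    private volatile String primary;

    public PrimaryWatcher(String connectString) throws Exception {
        this.zk = new ZooKeeper(connectString, 30_000, this);
        refresh();
    }

    @Override
    public void process(WatchedEvent event) {
        // The child list changed: a provider joined, or one (maybe the primary) disconnected.
        if (event.getType() == Event.EventType.NodeChildrenChanged) {
            try {
                refresh();
            } catch (Exception ignored) {
                // in a real system: log and retry
            }
        }
    }

    private void refresh() throws Exception {
        // `true` re-registers this watcher for the next change.
        List<String> providers = zk.getChildren(PROVIDERS_PATH, true);
        Collections.sort(providers);
        primary = providers.isEmpty() ? null : providers.get(0); // first provider becomes primary
    }

    public String getPrimary() {
        return primary;
    }
}
```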
- Services and data are separated.
- All services are stateless and can be added or destroyed dynamically.
- All data nodes are mirrored and can be added or destroyed dynamically.
Run ZooKeeper on port 2181.
- Set groupid in the property file to 0 or 1; for each value, run DataApplication twice and SlaveApplication once; then run RouterDataApplication/RouterCoordinatorApplication/MasterApplication twice.
- Run the client to observe the workload (a sample observation loop is sketched after this list).
- Set groupid in the property file to 2, and run DataApplication twice and SlaveApplication once.
- Run the client to observe the workload. How does it change?
- Stop one instance of RouterDataApplication/RouterCoordinatorApplication/MasterApplication.
- Run the client to observe the workload. Does it change?
- Set groupid in the property file to 2 and hotfix to 1, and run DataApplication.
- Run the client to observe the workload. Is the new backup SYNCed?
- Stop the group 2 DataApplication that was being read from.
- Run the client (reads only, no puts) to observe the workload. Which node is the new primary?
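
The steps above repeatedly say to run the client and observe the workload; below is a minimal, hypothetical observation loop. `KvClient`, `put`, and `get` are illustrative names, and the in-memory stub only stands in for the project's real Router-backed client.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class ObserveWorkload {

    // Illustrative client contract; the real client talks to the Router services.
    interface KvClient {
        void put(String key, String value);
        String get(String key);
    }

    public static void main(String[] args) throws InterruptedException {
        // Replace this stub with the project's real client to observe the cluster.
        Map<String, String> fake = new ConcurrentHashMap<>();
        KvClient client = new KvClient() {
            public void put(String key, String value) { fake.put(key, value); }
            public String get(String key) { return fake.get(key); }
        };

        for (int i = 0; i < 10; i++) {
            client.put("key-" + i, "value-" + i);
        }
        for (int round = 0; round < 3; round++) {
            for (int i = 0; i < 10; i++) {
                System.out.println("key-" + i + " -> " + client.get("key-" + i));
            }
            TimeUnit.SECONDS.sleep(1); // in the real run, watch which node answers each read
        }
    }
}
```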
- zookeeper
- dubbo