java - Ensure replication between data centres with Hazelcast -


I have an application incorporating a stretched Hazelcast cluster deployed on two data centres simultaneously. The two data centres are both functional but, at times, one of them is taken out of the network for SDN upgrades.

What I intend to achieve is to configure the cluster in such a way that each main partition from a DC will have at least two backups: one in the other data centre and one in the current one.

For this purpose, the documentation pointed me in the direction of partition groups (http://docs.hazelcast.org/docs/2.3/manual/html/ch12s03.html). Enterprise WAN Replication seemed to be exactly the thing we wanted but, unfortunately, that feature is not available in the free version of Hazelcast.

My configuration is as follows:

    NetworkConfig network = config.getNetworkConfig();
    network.setPort(hzClusterConfigs.getPort());
    JoinConfig join = network.getJoin();
    join.getMulticastConfig().setEnabled(hzClusterConfigs.isMulticastEnabled());
    join.getTcpIpConfig()
            .setMembers(hzClusterConfigs.getClusterMembers())
            .setEnabled(hzClusterConfigs.isTcpIpEnabled());
    config.setNetworkConfig(network);

    PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig()
            .setEnabled(true).setGroupType(PartitionGroupConfig.MemberGroupType.CUSTOM)
            .addMemberGroupConfig(new MemberGroupConfig().addInterface(hzClusterConfigs.getClusterDc1Interface()))
            .addMemberGroupConfig(new MemberGroupConfig().addInterface(hzClusterConfigs.getClusterDc2Interface()));
    config.setPartitionGroupConfig(partitionGroupConfig);

the configs used were:

    clusterMembers=host1,host2,host3,host4
    clusterDc1Interface=10.10.1.*
    clusterDc2Interface=10.10.1.*

However, with this set of configs, at any event triggered by changing the components of the cluster, a random node in the cluster started logging "No member group is available to assign partition ownership" every other second (as in https://github.com/hazelcast/hazelcast/issues/5666). Moreover, checking the state exposed by the PartitionService in JMX revealed that no partitions were getting populated, despite the apparently successful cluster state.

As such, I proceeded to replace the hostnames with the corresponding IPs, and the configuration worked: partitions were getting created and no nodes were acting out.
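If hostnames must remain the source of truth, the hostname-to-IP translation could also be automated at provisioning time instead of hard-coding IPs. A minimal sketch, assuming a helper of my own invention (the `MemberResolver` class and its method name are not part of Hazelcast or the question's code), that turns DNS names like those in `clusterMembers` into an IP list:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class MemberResolver {

    // Resolve each configured hostname to an IP address, skipping any
    // name that DNS cannot resolve yet (such a box can still join the
    // cluster later, once its record exists).
    public static List<String> resolveMembers(List<String> hostnames) {
        List<String> ips = new ArrayList<>();
        for (String host : hostnames) {
            try {
                ips.add(InetAddress.getByName(host).getHostAddress());
            } catch (UnknownHostException e) {
                // Unresolvable hosts are simply left out of the member list.
            }
        }
        return ips;
    }
}
```

The resulting list could then be handed to `join.getTcpIpConfig().setMembers(...)` in place of the raw hostnames, though this only sidesteps the symptom, not the underlying group-config behaviour.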

The problem here is that the boxes are created as part of an A/B deployment process, and the IPs are automatically assigned from a range of 244 addresses. Adding all 244 IPs seems a bit much, even if done programmatically from Chef rather than manually, because of the network noise it would entail. Checking at every deployment, with a telnet-based client, which machines are listening on the Hazelcast port also seems problematic, since the IPs differ from deployment to deployment, and we could end up in a situation in which part of the nodes in the cluster have one member list while the rest have a different member list at the same time.

Using hostnames would be the best solution, in my opinion, because we would rely on DNS resolution and wouldn't need to wrap our heads around IP resolution at provisioning time.

Does anyone know of a workaround for the group config issue? Or, perhaps, an alternative way to achieve the same behavior?

This is not possible at the moment. Backup groups cannot be designed to hold a backup of themselves. As a workaround you might be able to design four groups, but in that case there is no guarantee of one backup on each data centre, at least not without using three backups.
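To illustrate the four-group workaround, a sketch of what the partition group configuration might look like, assuming each data centre can be split across two address ranges (the interface values below are made-up placeholders, not taken from the question):

```java
// Hypothetical four-group layout: each DC is split into two member
// groups. With a backup count of 3, every partition has copies in
// three groups besides the owner's, though which DCs those groups
// fall into is not guaranteed.
PartitionGroupConfig partitionGroupConfig = config.getPartitionGroupConfig()
        .setEnabled(true)
        .setGroupType(PartitionGroupConfig.MemberGroupType.CUSTOM)
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.1.*"))  // DC1, group A (placeholder)
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.2.*"))  // DC1, group B (placeholder)
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.3.*"))  // DC2, group A (placeholder)
        .addMemberGroupConfig(new MemberGroupConfig().addInterface("10.10.4.*")); // DC2, group B (placeholder)
config.setPartitionGroupConfig(partitionGroupConfig);

// Three backups per partition, as noted above.
config.getMapConfig("default").setBackupCount(3);
```

This is a configuration fragment, not a guarantee of placement: Hazelcast spreads backups across groups, but nothing forces at least one of them onto the other data centre.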

Anyhow, in general I would not recommend spreading a Hazelcast cluster over multiple data centres, except in the specific situation where the DCs are interconnected in a way similar to a LAN network and redundancy is set up.

