{"_id":"56a9d3d43b04f20d00eccaa5","__v":14,"githubsync":"","user":"55e84e0f0693802d00bc6952","version":{"_id":"568a404e050eb50d00c07998","project":"568a404d050eb50d00c07995","__v":2,"createdAt":"2016-01-04T09:50:06.218Z","releaseDate":"2016-01-04T09:50:06.218Z","categories":["568a404e050eb50d00c07999","56cc2b81272aa4130002cce9"],"is_deprecated":false,"is_hidden":false,"is_beta":false,"is_stable":true,"codename":"","version_clean":"1.0.0","version":"1.0"},"project":"568a404d050eb50d00c07995","category":{"_id":"568a404e050eb50d00c07999","__v":7,"pages":["568a404f050eb50d00c0799b","5698f130cb127f0d003cc06a","5698f1483da4370d009d2079","569e6013d233620d00705550","569f3e578f6d4b0d00f13bd5","56a9d3d43b04f20d00eccaa5","56cc2b9e94c8f00b00b83d76"],"project":"568a404d050eb50d00c07995","version":"568a404e050eb50d00c07998","sync":{"url":"","isSync":false},"reference":false,"createdAt":"2016-01-04T09:50:06.816Z","from_sync":false,"order":9999,"slug":"documentation","title":"Documentation"},"updates":["56bd927fe0b1580d00b5d1c3"],"next":{"pages":[],"description":""},"createdAt":"2016-01-28T08:39:48.011Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"The BlazingCache server is the process who coordinates clients. \n\nThe main purpose of this coordination is to propagate *mutations* (invalidations and updates) from one client to other clients. The coordinator is also in charge of handling distributed locks and in general to guarantee the consistency of the state of the clients.\n\nIf a client issues a **fetch** then is the server who will proxy the fetch request of the client to other clients, in order to find and up-to-date value for the requested entry.\n\nEach clients keeps only a TCP connection open to the cache server, on the server side connections are managed using non blocking IO using Netty. 
There is no thread dedicated to specific clients and the protocol is totally asynchronous: this design lets a single cache server manage hundreds of clients without problems.\n\nThe server mantains in memory the list of keys known by each client and using this information it can route requests only to interested clients; this way the memory used by the cache server is minimal and notifications are sent only to useful clients reducing dramatically the network load.\n\nThe server takes care of the coordination of entry expiration, and so it has to keep in memory for each *mortal* entry the expected expiration deadline. Expiration uses a dedicated thread which runs at a configurable period of time.\n\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Replication\"\n}\n[/block]\nYou can start as many servers you want but actually only one, the **leader**, will work. Other servers will be in **backup** status and will replace the leader immediately in case of failure in order to keep the system up.\n\nIn fact without a live connection to a CacheServer the CacheClient keeps its local cache empty and blocks activities for every mutation to the entry set.\n\nIn production you will most likely be running at most two or three servers. 
It is optimal to have the hardware and software configuration of each server equal to each others, because in case of failover the backup server will immediately take on all the traffic of the previous leader.\n\nCache servers do not talk to each other, they only elect a leader using a Zookeeper ensemble.\n[block:api-header]\n{\n  \"type\": \"basic\",\n  \"title\": \"Configuration\"\n}\n[/block]\nIn the standard binary package the configuration is stored in \"conf/server.properties\".\nIn order to apply new configuration you have to restart the service.\nUsually you will update backup servers, restart it and then do the same to the leader, which in turn will become a backup.\n\n# Parameters:\n\n*     **clustering.mode**: unique id of the task (java 64bit signed long).\n  *      **singleserver** disables replication, no need for zookeeper, but client will need to be configured using the address of this server.\n  *      **clustered**  enable replication, you will need a zookeeper ensemble, client will discover the leader from zookeeper.\n*     ** server.host**:  Local address to listen for connection. This is the same name advertised on zookeeper for clients.\n*     **server.port**: Local TCP port to listen on. Default value is 1025.\n*     **server.ssl**: true/false. Enable/disable SSL.\n*     **server.ssl.certificatefile**: Path of a PKCS12 file which contains the SSL certificate for the SSL endpoint. If not provided a self-signed certificate will be generated at every boot.\n*     **server.ssl.certificatechainfile**: Path of the Certificate Chain File supporting the SSL certificate, it may be left blank.\n*     **server.ssl.certificatefilepassword**: Password for the SSL certificate\n*     **server.ssl.ciphers**: Limit the list of SSL Ciphers to use, leave blank to let the JVM + Netty decide for you.\n*     **server.jmx**. Enable registration on JMX of the MBean providing server status values. Default value is true. 
\n*     **zk.address**: Zookeeper connection string (only for clustered mode). It defaults to localhost:1281. See Zookeeper docs for more info.\n*     **zk.sessiontimeout**: Zookeeper initial session timeout. It defaults to 40000 (ms). See Zookeeper docs for more info.\n*     **zk.path**: Base path of Zookeeper filesystem. It defaults to */blazingcache*. Usually you are not going to change this value.\n*     **sharedsecret**: Shared secret that clients need to use to access the server, and so receive data from other clients or push data to them. It defaults to *blazingcache*. You **MUST** change this value in production.\n*     **io.worker.threads**: Number of Netty worker threads. It default to 16 which we discovered to be a good value for any purpose. If you set io.worker.threads to 0 Netty will use its defaults.\n*     **netty.callback.threads**: Size of the internal thread pool for handling callbacks on netty channels. It defaults to 64. If you set netty.callback.threads to 0 the system will use an unlimited thread pool.\n*     **channelhandlers.threads**: Size of the internal thread pool for handling callbacks. It defaults to 64. If you set channelhandlers.threads to 0 the system will use an unlimited thread pool.","excerpt":"","slug":"server-configuratio","type":"basic","title":"Servers"}
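As an illustration, a minimal "conf/server.properties" for a clustered deployment could look like the sketch below. The hostnames and the shared secret are placeholders; the numeric values are the defaults listed above.

```properties
# Clustered mode: replication enabled, leader elected via a ZooKeeper ensemble
clustering.mode=clustered

# Address advertised on ZooKeeper for clients (placeholder hostname)
server.host=cache1.example.com
server.port=1025
server.ssl=false

# ZooKeeper ensemble (placeholder hosts) and default session settings
zk.address=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
zk.sessiontimeout=40000
zk.path=/blazingcache

# MUST be changed in production
sharedsecret=change-me-in-production

# Threading defaults
io.worker.threads=16
netty.callback.threads=64
channelhandlers.threads=64
```

For a single-server setup you would instead set clustering.mode=singleserver, omit the zk.* entries, and point every client directly at this server's address.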
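The key-tracking behavior described above (the server remembers which keys each client holds, and routes a mutation only to the clients that actually cache the affected entry) can be sketched in a few lines of Java. This is an illustrative model only, not BlazingCache's actual implementation; the class and method names are invented for the example.

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch (NOT the real BlazingCache server code): the server
// tracks which keys each client holds, so an invalidation is routed only
// to the clients that actually cache the affected entry.
public class KeyRoutingSketch {
    // key -> ids of the clients currently holding that key
    private final Map<String, Set<String>> holders = new ConcurrentHashMap<>();

    // Called when a client reports that it has cached a key.
    public void registerKey(String clientId, String key) {
        holders.computeIfAbsent(key, k -> ConcurrentHashMap.newKeySet()).add(clientId);
    }

    // Returns the clients that must receive an invalidation for 'key',
    // excluding the client that originated the mutation.
    public Set<String> routeInvalidation(String sourceClientId, String key) {
        Set<String> interested = holders.getOrDefault(key, Collections.emptySet());
        Set<String> targets = new HashSet<>(interested);
        targets.remove(sourceClientId);
        holders.remove(key); // after the invalidation nobody holds the entry
        return targets;
    }

    public static void main(String[] args) {
        KeyRoutingSketch server = new KeyRoutingSketch();
        server.registerKey("client-a", "user:42");
        server.registerKey("client-b", "user:42");
        server.registerKey("client-c", "user:99");

        // Only client-b holds user:42 besides the mutating client-a;
        // client-c never receives useless traffic.
        Set<String> targets = server.routeInvalidation("client-a", "user:42");
        System.out.println(targets); // prints [client-b]
    }
}
```

The point of the design is visible in the last lines: clients that never cached the key are simply not contacted, which is what keeps both the server's memory footprint and the network load minimal.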
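Similarly, the expiration mechanism described earlier (a deadline kept in memory per *mortal* entry, swept by one dedicated thread at a configurable period) can be modeled like this. Again a hedged sketch with invented names, not the library's real code:

```java
import java.util.*;
import java.util.concurrent.*;

// Illustrative sketch (not BlazingCache's actual code): the server keeps the
// expiration deadline of every "mortal" entry in memory, and a single
// scheduled task periodically evicts the entries whose deadline has passed.
public class ExpirationSketch {
    private final ConcurrentMap<String, Long> deadlines = new ConcurrentHashMap<>();
    private final ScheduledExecutorService sweeper =
            Executors.newSingleThreadScheduledExecutor();

    public void trackMortalEntry(String key, long expiresAtMillis) {
        deadlines.put(key, expiresAtMillis);
    }

    // Returns the keys expired as of 'now' and stops tracking them.
    // The real server would broadcast invalidations for these keys here.
    public List<String> sweep(long now) {
        List<String> expired = new ArrayList<>();
        for (Map.Entry<String, Long> e : deadlines.entrySet()) {
            if (e.getValue() <= now) {
                expired.add(e.getKey());
            }
        }
        expired.forEach(deadlines::remove);
        return expired;
    }

    // One dedicated thread, running at a configurable period.
    public void start(long periodMillis) {
        sweeper.scheduleAtFixedRate(
                () -> sweep(System.currentTimeMillis()),
                periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }
}
```

Because the deadlines live in an ordinary in-memory map and the sweep runs on one background thread, expiration work stays cheap regardless of how many clients are connected.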