
# Native Client

Using the BlazingCache Java Native Client

The BlazingCache Java Native Client is the low-level object which manages the connection to the Cache Server (coordinator).

The CacheClient essentially manages a "near cache", that is, a local copy of data that lives on another resource, for instance a database. The term "near" means that the data is stored inside the same JVM which executes the CacheClient object. In BlazingCache the Cache "Server" is only a coordinator for CacheClients, so there is no "server" which hosts a copy of the cached data.

In order to configure the CacheClient you only have to supply the parameters for the discovery of the CacheServer and for authentication. Usually you will instantiate only one CacheClient per JVM.

The CacheClient will do whatever it can to keep a connection to the leader CacheServer active. Only one TCP connection is needed: all communication is asynchronous and the client multiplexes all RPC calls on the shared TCP connection.

The cache managed by the CacheClient is really "raw": it is in fact a Map<String,byte[]>, meaning that keys are always Java (Unicode) strings and values are always byte[]. There is no facility for object serialization; you need to implement your own way to serialize your objects.

If you need a simpler client just use the **JSR107 client**: you will get a generic Cache<K,V> which handles serialization of keys and values and local "separation" between different caches (it simply prefixes all your keys with the cache name, but from the client's point of view you will see several different caches).

## Booting the Client

In order to boot the CacheClient you are encouraged to use the CacheClientBuilder class.
First add the following Maven dependency to your project (or download the JAR from Maven Central):

```xml
<dependency>
   <groupId>org.blazingcache</groupId>
   <artifactId>blazingcache-core</artifactId>
   <version>VERSION</version>
</dependency>
```

Then you can use the following example:

```java
try (CacheClient client = CacheClientBuilder
        .newBuilder()
        .maxMemory(1024 * 1024 * 1024)
        .build()) {
    client.start();
    client.waitForConnection(10000);
    client.put("foo", "bar".getBytes(), System.currentTimeMillis() + 10000);
    String result = new String(client.fetch("foo").getSerializedData());
    System.out.println("result:" + result);
}
```

This simple code starts an "embedded" CacheServer on your JVM and creates a CacheClient. See the CacheClient configuration reference for the available configuration options about cache properties, discovery and authentication.

## Get vs Fetch

You have two ways to read data from the cache: **get** and **fetch**. If you use "get" you will read data only from the local cache. If you use "fetch" and the data is not present locally, the client will ask another client for the same entry: this operation lets you use the cache of other clients instead of reading from the original source (a database, for instance).

Sometimes it is better to use a "fetch" and sometimes a "get"; it depends on your data and on your clients, there is no magic recipe.

Beware that a cache is volatile by nature and the system can decide at any time to evict entries from the cache.

## Fetch priority

Fetch priority is the value used by the server in order to prioritize some clients over others for fetch operations.
More precisely, the server chooses the remote client starting from the clients that have the highest priority value (when a value is not set, 10 is adopted as the default).

This is a very useful property in the case of clients subject to long GC pauses, which may cause very slow fetch responses and, in some cases, hangs on the distributed cache. If you need to remove a client from the list of clients available for remote fetches altogether you may set its fetch priority to 0 (disabled).

## Put

In order to populate the cache you have to issue the **put** command. The put command takes three arguments: the key (a String), the value to store (a byte[]) and an optional expire timestamp.

Entries live in the cache for the specified time; if you need to change the expire timestamp you can use the **touch** command. If you leave the expire timestamp at 0 the entry is immortal.

Beware that the cache will *locally* refer directly to the byte[] value you passed, so you must never change the content of this array: any change will lead to unspecified behavior. Whenever the value reaches another JVM (for example by means of a fetch or a remote put) it is copied locally.

In order to guarantee that data is always up to date when issuing "get" commands, any time you issue a "put" command the new value is sent to the CacheServer, which in turn notifies the new value to all the other clients which are known to have the same entry in their local cache. Note that the put will block until every such client has acked the notification. Only clients which hold the entry in memory are notified, so if the entry is stored only by the local JVM the "put" command will return as soon as the server acks the registration of the presence of the entry in the local JVM.

In case of concurrent puts of different values the system will detect a **conflict** and then invalidate the entry.
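Since values are always byte[] and BlazingCache provides no serialization facility at this level, you need your own conversion before a put. A minimal sketch using plain JDK serialization (the `Bytes` helper and its method names are illustrative, not part of the BlazingCache API):

```java
import java.io.*;

// Illustrative helper: turn a Serializable POJO into the byte[] that
// CacheClient.put expects, and back again on the read side.
final class Bytes {

    static byte[] toBytes(Serializable value) {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(value);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return bos.toByteArray();
    }

    @SuppressWarnings("unchecked")
    static <T> T fromBytes(byte[] data) {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(data))) {
            return (T) ois.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new RuntimeException(e);
        }
    }
}
```

With a helper like this, a put becomes something like `client.put("user:1", Bytes.toBytes(user), 0)` for any Serializable `user` object, and the read side deserializes what `fetch` returns.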
## Load

**Load** allows you to populate the cache in a way very similar to put: it differs in that no invalidation will be requested of the other clients.

This command allows a client to cache an entry with no need to update the other peers holding the same entry: the entry is created or updated in the issuing client, and it will be subject to potential invalidation whenever a put command is issued on a peer client for the same entry.

## Put vs Load

The main difference between put and load lies in the usage of fetch priority. **The put command does not consider fetch priority**: when the new value is inserted in the cache, every other client featuring the same entry is invalidated, so in the case of slow clients (e.g. clients with fetch priority equal to 0) the issuing client must wait for the "slow" clients' acks. In this scenario the overall performance is determined by the slow clients' speed, making the fetch priority useless. On the other hand, **load** overcomes this shortcoming by **avoiding the cache invalidation request** on the other peer clients.

## Invalidation

When you update or delete data on your database, or you detect that cached data has become "stale", you have to invalidate the cache entry which represents the data you changed.

The operation is **invalidate**, which removes the cached value from the local JVM and notifies the CacheServer that the entry is no longer valid. The CacheServer, similarly to "put", will in turn notify every other client which is known to have a copy of the entry. Note that the invalidate operation will block until every other client has acked the notification.
Usually with BlazingCache you are going to partition data in the cache by using a prefix for each key, so there is a special **invalidateByPrefix** operation which invalidates every entry whose key *startsWith* the given prefix. Beware that the invalidateByPrefix command is broadcast to every connected client, so this kind of operation needs to be acked by every client, not only by the clients which hold a key matching the prefix.

## Expiration

Each entry can have an expiration timestamp, an instant in time after which the entry should be automatically invalidated.

Expiration is executed by the CacheServer, which holds in memory the expiration timestamp for every entry which is not immortal. Note that every "put" and "touch" operation registers the new expiration timestamp with the CacheServer.

Expiration operations are driven by the CacheServer in order to have a central point of reference for time in the distributed system.

Expiration is an optional feature: if you do not set the expiration timestamp your entries will never expire.

## Eviction

Eviction is the process of removing entries from the local JVM, mainly in order to free memory. In BlazingCache there are actually two reasons for eviction:

  * the configuration of a **maxMemory** limit
  * the presence of **network failures**

The **maxMemory** limit is a configurable threshold on the sum of the bytes directly held by the cache. In the current version this count is done only by summing the lengths of the byte[] values. Neither keys nor other system structures are counted; also be aware that the JVM stores Strings and Maps in a very complex way, so it is not simple to compute the effective in-memory size of these structures. Usually this overhead is much smaller than the actual data stored, so BlazingCache ignores it.
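As a toy model of this accounting (memory measured as the sum of value-array lengths, entries evicted in least-recently-used order until usage falls back under the limit), a sketch built on a plain access-ordered LinkedHashMap; this is only an illustration of the policy, not the CacheClient internals:

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Toy near-cache: counts only the byte[] value lengths, evicts in LRU order.
class LruByteCache {
    private final long maxMemory;
    private long usedMemory = 0;
    // accessOrder = true: iteration visits least-recently-used entries first
    private final LinkedHashMap<String, byte[]> entries =
            new LinkedHashMap<>(16, 0.75f, true);

    LruByteCache(long maxMemory) {
        this.maxMemory = maxMemory;
    }

    void put(String key, byte[] value) {
        byte[] old = entries.put(key, value);
        if (old != null) {
            usedMemory -= old.length;
        }
        usedMemory += value.length;
        evictIfNeeded();
    }

    byte[] get(String key) {
        return entries.get(key); // the access refreshes the LRU position
    }

    private void evictIfNeeded() {
        Iterator<Map.Entry<String, byte[]>> it = entries.entrySet().iterator();
        while (usedMemory > maxMemory && it.hasNext()) {
            Map.Entry<String, byte[]> eldest = it.next();
            usedMemory -= eldest.getValue().length;
            it.remove();
        }
    }

    long usedMemory() {
        return usedMemory;
    }
}
```

Note that, just as described below for the real client, the limit is a target: an oversized entry is accepted first and only trimmed away afterwards by the eviction pass.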
The limit on memory is a **target** value: note that BlazingCache will not prevent you from storing an entry larger than the limit, and the reason is very simple: the byte[] value that your Java code is going to store in the local cache has already been allocated by the JVM before the call to CacheClient.put(), so there is no reason to reject it. Eviction caused by the maxMemory limit is done in the background by the CacheClient object on a support thread (the same thread which handles reconnection in case of network failure), and it evicts entries in LRU (least recently used) order until the memory usage goes below the configured threshold.

Evicted entries are not invalidated: they are only removed from the local JVM, and they will continue to be present in the other cache clients which previously stored a copy of them.

In the presence of a **network failure** the cache will **instantly** evict all the entries from the cache: in the absence of a connection to the CacheServer the CacheClient cannot guarantee the consistency of the data in the local cache. The most frequent causes of network failures in the real world are the reboot of the CacheServer and a local Garbage Collector which slows the JVM down until the system cannot handle the network connection.

Sometimes it may happen that an entry is cached locally in a client but no longer used. Despite that, the client is going to be notified forever about updates occurring in other clients: in large clusters this may lead to overhead and speed issues. A possible solution is the CacheClient configuration property named **maxLocalEntryAge**, which defines a local time-to-live applied to all the entries in the cache.
This property must not be confused with expireTime, put's input parameter: while the latter denotes the entry's expiration date (an absolute deadline), the former represents an actual time span. In other words, once an entry gets older than maxLocalEntryAge, an invalidation is requested, with the aim of avoiding "stale" data.

## Locks

If you really need to achieve strong consistency you will need to acquire **locks** in order to block any operation on a key, for example if you want to perform a compare-and-set operation (like the JSR107 replace operation).

Locks are not "reentrant" with respect to the running thread. Locks are always held per key, not per set of keys.

After creating a lock (a "KeyLock" object) you need to pass the lock reference to every operation (fetch, put, touch, invalidate). Locks are lost in case of client disconnection (if you are in trouble just restart the locking client, or the server, which in turn will disconnect every client and so release all locks).

Please note that the 'get' operation is local only and locks do not apply to it; in that case you need to use 'fetch' in order to read data from the local cache while inside the lock.

Every method of CacheClient which does not take a "lock" parameter implicitly acquires a lock and releases it when the operation is finished. These implicit locks are visible only server-side: the client cannot access the lock reference object, as it exists only on the server.
```java
try (CacheClient client = CacheClientBuilder
        .newBuilder()
        .maxMemory(1024 * 1024 * 1024)
        .build()) {
    client.start();
    client.waitForConnection(10000);
    KeyLock lock = client.lock("foo");
    try {
        byte[] data = client.fetch("foo", lock).getSerializedData();
        if (new String(data).equals("expectedvalue")) {
            client.put("foo", "bar".getBytes(), System.currentTimeMillis() + 10000, lock);
            String result = new String(client.fetch("foo", lock).getSerializedData());
            System.out.println("result:" + result);
        }
    } finally {
        client.unlock("foo");
    }
}
```

## getObject/fetchObject/putObject

You can store simple POJOs on BlazingCache in order to avoid the need to serialize/deserialize them and so save resources. The key to this feature are the methods **getObject**, **fetchObject** and **putObject**. When you issue a putObject, a soft reference to the value is held in memory for further getObject/fetchObject lookups. A serialized version is sent to the Cache Server in order to update the copies held by the other (remote) clients. If the local GC clears the SoftReference and getObject is called on the same key, the value is deserialized from the byte[] stored at the low level by the CacheClient, or fetched from another remote CacheClient.

This way, if the local JVM has enough memory, client code uses its own objects by reference, falling back to deserialization in case of low memory. Memory counters consider only the byte[] serialized version of the objects, since the deserialized version may be GCed at any time.
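The soft-reference fallback described above can be modeled in plain Java. This is a toy illustration, not the CacheClient internals; the fallback `Supplier` stands in for deserializing the low-level byte[] or fetching from a remote client:

```java
import java.lang.ref.SoftReference;
import java.util.function.Supplier;

// Toy model of the putObject/getObject memory strategy: keep a SoftReference
// to the live object; if the GC has cleared it under memory pressure, rebuild
// the value from a fallback source and cache the soft reference again.
class SoftValue<T> {
    private SoftReference<T> ref;

    SoftValue(T value) {
        this.ref = new SoftReference<>(value);
    }

    T get(Supplier<T> fallback) {
        T value = ref.get();
        if (value == null) {              // cleared by the GC
            value = fallback.get();       // e.g. deserialize the stored byte[]
            ref = new SoftReference<>(value);
        }
        return value;
    }
}
```

As long as memory is plentiful the soft reference keeps returning the very same instance, which is why a fetchObject right after a putObject can yield the identical object.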
```java
try (CacheClient client = CacheClientBuilder
        .newBuilder()
        .maxMemory(1024 * 1024 * 1024)
        .build()) {
    client.start();
    client.waitForConnection(10000);
    MyBean bar = new MyBean();
    client.putObject("foo", bar);
    MyBean bar2 = client.fetchObject("foo");
    assertSame(bar, bar2);
}
```

**Beware that your code MUST NOT mutate objects returned from getObject/fetchObject, because they are shared objects.** If your code is not "safe" and mutates entries from the cache, you should deserialize the cached value yourself, maybe using the CacheClient's embedded EntrySerializer.

## URI Parameters

In some JCache integrations, such as Hibernate, passing additional properties to the client is not feasible. BlazingCache, on the other hand, allows you to set additional properties on the client by appending parameters to the URI query string.
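As a sketch of the mechanism, query parameters can be read out of such a URI into `Properties` with the plain JDK. The URI shape and the `fetchPriority` parameter name here are made up for illustration; see the CacheClient configuration reference for the real property names:

```java
import java.net.URI;
import java.util.Properties;

// Illustrative: extract "key=value" pairs from a URI query string into
// Properties, the shape of configuration a client could consume.
class UriParams {
    static Properties parse(String uriString) {
        Properties props = new Properties();
        String query = URI.create(uriString).getQuery();
        if (query != null) {
            for (String pair : query.split("&")) {
                int eq = pair.indexOf('=');
                if (eq > 0) {
                    props.setProperty(pair.substring(0, eq), pair.substring(eq + 1));
                }
            }
        }
        return props;
    }
}
```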