large json data size retrieved from apstrata run script
  • Hi Karim,

    I am using the Android SDK to call "RunScript" to get JSON data from Apstrata:

    I call getSignedRequestUrl, then I run an HTTP POST request.

    response = client.getSignedRequestUrl("RunScript", parameters, null, AuthMode.SIMPLE);
    response += "&apsdb.store=DefaultStore&apsdb.scriptName=" + scriptname  + "&apsws.responseType=json&command="+command;

    HttpPost httpPost = new HttpPost(response);
    httpPost.setHeader("Content-Type", "application/json");
    response2 = httpClient.execute(httpPost);
    InputStream responseStream = response2.getEntity().getContent();

    Then I parse the JSON:

    BufferedReader br = new BufferedReader(new InputStreamReader(responseStream));
    /*BufferedReader br = new BufferedReader(
     new InputStreamReader(
       new BoundedInputStream(responseStream, 12048)
     )
    );*/
    Log.v("BufferedReader", "" + br);
    String line, result = "";
    String eol = System.getProperty("line.separator");
    while ((line = br.readLine()) != null) {
        result += line + eol;
    }
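    One likely contributor to the allocation churn here is that `result += line + eol` copies the whole accumulated string on every iteration, so memory traffic grows quadratically with response size. A minimal sketch of the same loop using `StringBuilder` (the class and method names below are illustrative, not part of the Apstrata SDK):

    ```java
    import java.io.BufferedReader;
    import java.io.StringReader;

    public class ReadResponse {
        // Accumulate lines in one growing StringBuilder buffer instead of
        // creating (and discarding) a new String on every iteration.
        static String readAll(BufferedReader br) throws Exception {
            StringBuilder sb = new StringBuilder();
            String eol = System.getProperty("line.separator");
            String line;
            while ((line = br.readLine()) != null) {
                sb.append(line).append(eol);
            }
            return sb.toString();
        }

        public static void main(String[] args) throws Exception {
            // Stand-in for the HTTP entity stream from response2.getEntity().
            BufferedReader br = new BufferedReader(new StringReader("{\"result\":[]}"));
            System.out.println(readAll(br));
        }
    }
    ```

    This does not shrink the final string, but it avoids the repeated intermediate copies that trigger back-to-back GC passes.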

    The problem is that the JSON data retrieved is so big that it is causing memory-heap pressure:

    09-09 13:00:08.104: D/dalvikvm(17975): GC_FOR_ALLOC freed 5599K, 10% free 53019K/58820K, paused 38ms, total 49ms
    09-09 13:00:08.154: D/dalvikvm(17975): GC_FOR_ALLOC freed 1396K, 11% free 52474K/58820K, paused 22ms, total 22ms
    09-09 13:00:08.184: D/dalvikvm(17975): GC_FOR_ALLOC freed 672K, 11% free 52475K/58820K, paused 29ms, total 29ms
    09-09 13:00:08.204: D/dalvikvm(17975): GC_FOR_ALLOC freed 662K, 11% free 52492K/58820K, paused 23ms, total 24ms
    09-09 13:00:08.244: D/dalvikvm(17975): GC_FOR_ALLOC freed 697K, 11% free 52477K/58820K, paused 30ms, total 30ms
    09-09 13:00:08.264: D/dalvikvm(17975): GC_FOR_ALLOC freed 647K, 11% free 52504K/58820K, paused 22ms, total 22ms
    09-09 13:00:08.284: D/dalvikvm(17975): GC_FOR_ALLOC freed 715K, 11% free 52478K/58820K, paused 22ms, total 22ms
    09-09 13:00:08.324: D/dalvikvm(17975): GC_FOR_ALLOC freed 648K, 11% free 52498K/58820K, paused 33ms, total 37ms
    09-09 13:00:08.364: D/dalvikvm(17975): GC_FOR_ALLOC freed 635K, 11% free 52549K/58820K, paused 22ms, total 22ms
    09-09 13:00:08.394: D/dalvikvm(17975): GC_FOR_ALLOC freed 707K, 11% free 52541K/58820K, paused 21ms, total 21ms
    09-09 13:00:08.434: D/dalvikvm(17975): GC_FOR_ALLOC freed 690K, 11% free 52553K/58820K, paused 35ms, total 35ms
    09-09 13:00:08.454: D/dalvikvm(17975): GC_FOR_ALLOC freed 705K, 11% free 52544K/58820K, paused 22ms, total 22ms
    09-09 13:00:08.484: D/dalvikvm(17975): GC_FOR_ALLOC freed 689K, 11% free 52545K/58820K, paused 25ms, total 25ms
    09-09 13:00:08.504: D/dalvikvm(17975): GC_FOR_ALLOC freed 677K, 11% free 52546K/58820K, paused 26ms, total 26ms
    09-09 13:00:08.534: D/dalvikvm(17975): GC_FOR_ALLOC freed 682K, 11% free 52561K/58820K, paused 21ms, total 21ms

    and this is causing a delay.

    My question: are you zipping the Apstrata response on your servers? Is there a way to decrease the size of the JSON retrieved?

    Thank you.
  • Hi,

    Yes, we do zip large responses. However, I do not think that this would help in your case, since the response would be unzipped on your side and you would end up with the same problem.

    Looking at your logs, it seems to me that the garbage collector is not able to free any memory, which normally indicates that there is a memory leak somewhere.

    For now, I recommend resorting to your favorite profiling tool - you can use jvisualvm.exe (in JDK_Path/bin) if you do not have a preference. It should allow you to spot the long-lived objects and those that are consuming memory.

    Keep me posted.

    Karim
  • hi Karim,

    Thank you for the reply. I will look into profiling, but let me clarify: once I decrease the returned fields, say by limiting the query to 10 returned rows in Apstrata, the garbage-collector activity decreases. So could we try zipping this response on your server side and see what happens? I am getting 50 records with a lot of JSON fields included in the response, which I guess is what is causing the problems.

    Nour
    Thank you.
  • Hi Nour,

    I understand your concern and, as mentioned in my preceding post, I believe that zipping the response would not resolve it, as you would still have to unzip it on the client side.

    Note that you can specify how many records to return using the "apsdb.resultsPerPage" parameter in your Apstrata request. By default, the value is set to 50, but you can decrease it to 10 if that allows your app to function properly.
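    For illustration, the parameter can be appended to the signed request URL the same way the other `apsdb.*` parameters are in the original snippet (the URL below is a placeholder, not a real endpoint):

    ```java
    public class PageSize {
        // Append the documented apsdb.resultsPerPage parameter to a signed
        // request URL; smaller pages mean a smaller JSON payload per call.
        static String withPageSize(String signedUrl, int resultsPerPage) {
            return signedUrl + "&apsdb.resultsPerPage=" + resultsPerPage;
        }

        public static void main(String[] args) {
            // Placeholder standing in for the result of getSignedRequestUrl.
            String url = withPageSize("https://example.invalid/RunScript?sig=abc", 10);
            System.out.println(url);
        }
    }
    ```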

    Keep me posted.

    Karim.

  • Hi Karim,

    I tried to get the length of the response retrieved from Apstrata, but I am getting -1:

    response2.getEntity().getContentLength() == -1

    How could I find the length of data retrieved from your server?

    I need to know whether the data exceeds 1 MB or not; this is important so I can decide whether to limit or decrease the number of rows.

    Thank you. 
  • Hi,

    We do not send the Content-Length header. One option for knowing the size of the content is to stringify the JSON you receive and get its length. However, this operation is costly, so I recommend doing it on the server.
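    On the client side, since the 1 MB check above only needs a byte count, one cheap alternative to stringifying is to tally the bytes as they are read from the entity stream. A sketch (this `CountingInputStream` wrapper is hypothetical, not part of HttpClient or the Apstrata SDK):

    ```java
    import java.io.ByteArrayInputStream;
    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical helper: wraps the entity stream and counts how many bytes
    // pass through, so the size is known once the response has been consumed.
    class CountingInputStream extends FilterInputStream {
        long count = 0;

        CountingInputStream(InputStream in) {
            super(in);
        }

        @Override
        public int read() throws IOException {
            int b = super.read();
            if (b != -1) count++;
            return b;
        }

        @Override
        public int read(byte[] buf, int off, int len) throws IOException {
            int n = super.read(buf, off, len);
            if (n > 0) count += n;
            return n;
        }

        public static void main(String[] args) throws IOException {
            // Stand-in for response2.getEntity().getContent().
            CountingInputStream in = new CountingInputStream(
                    new ByteArrayInputStream("{\"result\":[1,2,3]}".getBytes()));
            while (in.read() != -1) { /* parse as usual */ }
            System.out.println(in.count + " bytes"); // compare against 1 MB here
        }
    }
    ```

    The count reflects the decompressed stream as delivered by the HTTP client, which is what matters for the heap.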

    If you are directly invoking an Apstrata API (e.g. Query) from the mobile app, it is anyway a good practice to wrap it with a server-side script that will invoke the API itself (and invoke the script from the mobile app).

    The script will thus be able to calculate the length of the rows and return it, either as a parameter or as part of the response header. In that latter case, you might be interested in checking the httpRespond object.

    Karim.
