[Glass] rest api returning large result

Otto Behrens otto at finworks.biz
Mon May 20 07:21:33 PDT 2024


Thanks for your reply, James.


> May I recommend https://github.com/jgfoster/WebGS ? With this I was able
> to read a 64 MB extent from the file system, send it over an HTTP
> connection, and write it to the file system in 0.164 seconds total.
>

Does WebGS come with its own HTTP(S) server? And are you reverse proxying
with another server in front?


> If the data were already in GemStone I’m sure it would take much less
> time. Let me know if you have questions!
>

I bet the WebGS framework is more efficient than Seaside; do you have a REST
API framework as well?


>
> James Foster
>
> On May 20, 2024, at 12:48 AM, Otto Behrens via Glass <glass at lists.gemtalksystems.com> wrote:
>
> We have not managed to fix this yet. What is your opinion on the following
> ideas?
>
> 1. Otto, you are an idiot. Why would you be sending a 70MB JSON response
> out on a REST API? This is not how you do an API. [Otto: that may well be
> true. How should I then get the data across to the user? Is there someone
> who can help me with a solution?]
> 2. Otto, you have not kept up to date with things and you are the only one
> in the whole world who is using WAGsZincAdaptor serving as an nginx
> upstream. WTF. [Otto: Yes, we are here on the bottom tip of Africa where
> the internet is slow and we read even slower, sorry about that. Please
> point me to some sites, documents and any other material so that I can
> start reading.]
> 3. Otto, have you heard of the idea of compression? You should know that
> JSON will compress to at least a tenth of its original size because it is
> text with a lot of repetition. [Otto: yes, I downloaded a zip file once and
> could not read it in vim. Is this what I should do: compress the connection
> between nginx and the Zinc adaptor? Or should I send the JSON back as a
> compressed zip file? See the sketch after this list.]
> 4. Otto, you should get to know nginx and its settings and understand all
> the stuff nginx spits out when debug logging is on. Better still, download
> the C source code; you should still be able to read it after only writing
> Smalltalk for 20 years. [Otto: Are you superhuman? Have you seen all of
> that? Please enlighten me, as this will take me years.]
>
> Of course I missed some ideas. Please feel free to add them to the list.
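>
> For idea 3, a minimal sketch of what switching compression on in nginx
> might look like; the values are guesses on my part, not a tested
> configuration:
>
>     gzip              on;
>     gzip_proxied      any;               # compress even when the request arrived via another proxy
>     gzip_types        application/json;  # text/html is compressed by default
>     gzip_min_length   1024;              # skip very small responses
>     gzip_comp_level   5;                 # trade CPU for size
>
> If I read the docs correctly, with something like this nginx would compress
> the JSON on its way to the client, while the hop between the Zinc adaptor
> and nginx stays uncompressed.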
>
> Otto Behrens
> +27 82 809 2375
> <http://za.linkedin.com/in/waltherbehrens>
> www.finworks.biz
>
>
> On Fri, May 17, 2024 at 7:09 AM Otto Behrens <otto at finworks.biz> wrote:
>
>> Hi,
>>
>> We are running into a performance problem where our API returns about
>> 70MB of JSON content. We run an nginx web server which connects to
>> a WAGsZincAdaptor that we start in a topaz session. Do you perhaps have
>> the same kind of setup, and can you please give me some advice on this?
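>>
>> For context, the relevant part of the nginx configuration looks roughly
>> like the sketch below; the port and location are placeholders, not our
>> real values:
>>
>>     upstream gemstone_api {
>>         server 127.0.0.1:8383;    # the WAGsZincAdaptor started in the topaz session
>>     }
>>
>>     server {
>>         listen 443 ssl;           # certificate directives omitted
>>         location /api/ {
>>             proxy_pass         http://gemstone_api;
>>             proxy_http_version 1.1;
>>         }
>>     }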
>>
>> We found that converting objects to JSON (using Object >> asJson from
>> Seaside-JSON-Core) was not performing well and was eating loads of memory
>> because of WABuilder >> render:. This is not the issue, and we improved it
>> a bit (by eliminating String streamContents: and streaming more directly).
>>
>> The problem seems to be that, after producing the JSON content,
>> transmitting the response takes a long time.
>>
>> As an experiment, I read a 16MB file from disk and returned that as the
>> result of an API call, to eliminate all JSON-producing code. I used curl
>> as a client on the same machine as the nginx server, the stone and the
>> topaz session, and it takes 26 seconds. This eliminates most overhead
>> (no network latency).
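>>
>> The timing was measured with something along these lines (the URL is a
>> placeholder for the test endpoint):
>>
>>     curl -s -o /dev/null -w 'total: %{time_total}s\n' \
>>         https://localhost/api/test-file
>>
>> which reports about 26 seconds for the 16MB body, even with everything on
>> one machine.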
>>
>> The stack below is what I see most of the time:
>>
>> 1  SocketStream >> nextPutAll:  @natCode+0x4d  [GsNMethod 169113089]
>>      FP: 0x7f2c0fee9930=StackLimit[-218] , callerFP: StackLimit[-212]
>>      arg 1: 0x7f2bee7f0de0 (cls:103425 ByteArray size:16384)
>>      rcvr:  0x7f2bff68f670 (cls:144280577 SocketStream size:12)
>> 2  ZnBivalentWriteStream >> next:putAll:startingAt:  @natCode+0x2cf  [GsNMethod 158727169]
>>      FP: 0x7f2c0fee9960=StackLimit[-212] , callerFP: StackLimit[-202]
>>      arg 3: 69337098 (SmallInteger 8667137)
>>      arg 2: 0x7f2c0064fe50 (cls:74753 String size:16627226) '(large_or_fwd (size 16627226))'
>>      arg 1: 131074 (SmallInteger 16384)
>>      rcvr:  0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
>> 3  ZnUtils class >> nextPutAll:on:  @natCode+0x421  [GsNMethod 175369473]
>>      FP: 0x7f2c0fee99b0=StackLimit[-202] , callerFP: StackLimit[-196]
>>      arg 2: 0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
>>      arg 1: 0x7f2c0064fe50 (cls:74753 String size:16627226) '(large_or_fwd (size 16627226))'
>>      rcvr:  0x7f2c0c335750 oid:143053313 (cls:143054593 ZnUtils class size:19)
>> 4  ZnByteArrayEntity >> writeOn:  @natCode+0xdb  [GsNMethod 269993473]
>>      FP: 0x7f2c0fee99e0=StackLimit[-196] , callerFP: StackLimit[-186]
>>      arg 1: 0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
>>      rcvr:  0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3)
>> 5  ZnEntityWriter >> writeEntity:  @natCode+0x382  [GsNMethod 269988609]
>>      FP: 0x7f2c0fee9a30=StackLimit[-186] , callerFP: StackLimit[-180]
>>      arg 1: 0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3)
>>      rcvr:  0x7f2c00675398 (cls:145876737 ZnEntityWriter size:2)
>> 6  ZnMessage >> writeOn:  @natCode+0x295  [GsNMethod 158696193]
>>      FP: 0x7f2c0fee9a60=StackLimit[-180] , callerFP: StackLimit[-174]
>>      arg 1: 0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
>>      rcvr:  0x7f2c0064fe20 (cls:145901313 ZnResponse size:3)
>> 7  ZnResponse >> writeOn:  @natCode+0x1f0  [GsNMethod 155024025857]
>>      FP: 0x7f2c0fee9a90=StackLimit[-174] , callerFP: StackLimit[-169]
>>      arg 1: 0x7f2bff68f670 (cls:144280577 SocketStream size:12)
>>      rcvr:  0x7f2c0064fe20 (cls:145901313 ZnResponse size:3)
>> 8  ZnSingleThreadedServer >> writeResponse:on:  @natCode+0xa3  [GsNMethod 169204737]
>>      FP: 0x7f2c0fee9ab8=StackLimit[-169] , callerFP: StackLimit[-162]
>>      arg 2: 0x7f2bff68f670 (cls:144280577 SocketStream size:12)
>>      arg 1: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3)
>>      rcvr:  0x7f2bff5de528 oid:4763064833 (cls:144532225 ZnManagingMultiThreadedServer size:9)
>>
>> Kind regards
>> Otto Behrens
>> +27 82 809 2375
>> <http://za.linkedin.com/in/waltherbehrens>
>> www.finworks.biz
>>
>>