From otto at finworks.biz Thu May 16 22:09:34 2024
From: otto at finworks.biz (Otto Behrens)
Date: Fri, 17 May 2024 07:09:34 +0200
Subject: [Glass] rest api returning large result
Message-ID:

Hi,

We are running into a performance problem where our API returns about 70MB of JSON content. We run an nginx web server which connects to a WAGsZincAdaptor that we start in a topaz session. Do you perhaps have the same kind of setup, and can you please give me some advice on this?

We found that converting objects to JSON (using Object >> asJson from Seaside-JSON-Core) was not performing well and was eating loads of memory because of WABuilder >> render:. This is not the issue, and we improved it a bit (by eliminating String streamContents: and streaming more directly).
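In simplified terms, the improvement was to stop materialising the whole document as one String and to render onto the response stream instead. A sketch (response is the current WAResponse, buildContent stands in for our domain code, and #render:onStream: is our own small extension of WABuilder):

    "before: builds the complete document in memory first"
    response stream nextPutAll: (WAJsonCanvas builder
        render: [ :json | self buildContent jsonOn: json ]).

    "after: writes each piece straight onto the response stream"
    WAJsonCanvas builder
        render: [ :json | self buildContent jsonOn: json ]
        onStream: response stream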
The problem seems to be that after producing the JSON content, transmitting the response takes a long time.

As an experiment, I read a 16MB file from disk and returned that as the result of an API call, to eliminate all JSON-producing code. I used curl as a client on the same machine as the nginx server, the stone and the topaz session, and it takes 26 seconds. This eliminates most overhead (no network latency).

The stack below is what I see most of the time:

1 SocketStream >> nextPutAll: @natCode+0x4d [GsNMethod 169113089]
    FP: 0x7f2c0fee9930=StackLimit[-218] , callerFP: StackLimit[-212]
    arg 1:0x7f2bee7f0de0 (cls:103425 ByteArray size:16384)
    rcvr: 0x7f2bff68f670 (cls:144280577 SocketStream size:12)
2 ZnBivalentWriteStream >> next:putAll:startingAt: @natCode+0x2cf [GsNMethod 158727169]
    FP: 0x7f2c0fee9960=StackLimit[-212] , callerFP: StackLimit[-202]
    arg 3:69337098 (SmallInteger 8667137)
    arg 2:0x7f2c0064fe50 (cls:74753 String size:16627226)'(large_or_fwd (size 16627226))'
    arg 1:131074 (SmallInteger 16384)
    rcvr: 0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
3 ZnUtils class >> nextPutAll:on: @natCode+0x421 [GsNMethod 175369473]
    FP: 0x7f2c0fee99b0=StackLimit[-202] , callerFP: StackLimit[-196]
    arg 2:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
    arg 1:0x7f2c0064fe50 (cls:74753 String size:16627226)'(large_or_fwd (size 16627226))'
    rcvr: 0x7f2c0c335750 oid:143053313 (cls:143054593 ZnUtils class size:19)
4 ZnByteArrayEntity >> writeOn: @natCode+0xdb [GsNMethod 269993473]
    FP: 0x7f2c0fee99e0=StackLimit[-196] , callerFP: StackLimit[-186]
    arg 1:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
    rcvr: 0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3)
5 ZnEntityWriter >> writeEntity: @natCode+0x382 [GsNMethod 269988609]
    FP: 0x7f2c0fee9a30=StackLimit[-186] , callerFP: StackLimit[-180]
    arg 1:0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3)
    rcvr: 0x7f2c00675398 (cls:145876737 ZnEntityWriter size:2)
6 ZnMessage >> writeOn: @natCode+0x295 [GsNMethod 158696193]
    FP: 0x7f2c0fee9a60=StackLimit[-180] , callerFP: StackLimit[-174]
    arg 1:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2)
    rcvr: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3)
7 ZnResponse >> writeOn: @natCode+0x1f0 [GsNMethod 155024025857]
    FP: 0x7f2c0fee9a90=StackLimit[-174] , callerFP: StackLimit[-169]
    arg 1:0x7f2bff68f670 (cls:144280577 SocketStream size:12)
    rcvr: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3)
8 ZnSingleThreadedServer >> writeResponse:on: @natCode+0xa3 [GsNMethod 169204737]
    FP: 0x7f2c0fee9ab8=StackLimit[-169] , callerFP: StackLimit[-162]
    arg 2:0x7f2bff68f670 (cls:144280577 SocketStream size:12)
    arg 1:0x7f2c0064fe20 (cls:145901313 ZnResponse size:3)
    rcvr: 0x7f2bff5de528 oid:4763064833 (cls:144532225 ZnManagingMultiThreadedServer size:9)

Kind regards

Otto Behrens
+27 82 809 2375
www.finworks.biz

Disclaimer & Confidentiality Note: This email is intended solely for the use of the individual or entity named above as it may contain information that is confidential and privileged. If you are not the intended recipient, be advised that any dissemination, distribution or copying of this email is strictly prohibited. FINWorks cannot be held liable by any person other than the addressee in respect of any opinions, conclusions, advice or other information contained in this email.

From otto at finworks.biz Mon May 20 00:48:57 2024
From: otto at finworks.biz (Otto Behrens)
Date: Mon, 20 May 2024 09:48:57 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID:

We have not managed to fix this yet. What is your opinion on the following ideas?

1. Otto, you are an idiot. Why would you be sending a 70MB JSON response out on a REST API? This is not how you do an API. [Otto: that may well be true. How should I then get the data across to the user? Is there someone who can help me with a solution?]
2. Otto, you have not kept up to date with things and you are the only one in the whole world using WAGsZincAdaptor as an nginx upstream. WTF. [Otto: Yes, we are here on the bottom tip of Africa where the internet is slow and we read even slower, sorry about that. Please point me at some sites, documents and any other material so that I can start reading.]
3. Otto, have you heard of the idea of compression? You should know that JSON will compress to at least a 10th of its original size, because it is text with a lot of repetition. [Otto: yes, I downloaded a zip file once and could not read it in vim. Is this what I should do: compress the connection between nginx and the Zinc adaptor? Or should I send the JSON back as a compressed zip file?]
4. Otto, you should get to know nginx and its settings and understand all the stuff nginx spits out when debug logging is on. Better still, download the C source code; you should still be able to read it after only Smalltalking for 20 years. [Otto: Are you superhuman? Have you seen all of that? Please enlighten me, as this will take me years.]

Of course I missed some ideas. Please feel free to add them to the list.

Otto Behrens
+27 82 809 2375
www.finworks.biz

On Fri, May 17, 2024 at 7:09 AM Otto Behrens wrote:
> [...]
From ralph.mauersberger at gmx.net Mon May 20 03:27:01 2024
From: ralph.mauersberger at gmx.net (Ralph Mauersberger)
Date: Mon, 20 May 2024 12:27:01 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID:

Hello Otto,

I'm not using Seaside, but here are my two cents:

70MB is quite large, but it should work fine if the relevant settings are adjusted (increased gem memory, allowed nginx response size). I agree that 26 seconds for 16MB is a very long time.

On an abstract level, these are my thoughts:

* Depending on your API, it might be a good idea to think about pagination and split the data over several API requests with smaller responses.
* I would configure nginx to do the response compression (gzip) to the client.
* I would repeat your local curl test with a connection to your Zinc server port, just to take nginx out of the equation.
* If nginx is not causing the trouble, I would try to use GemStone's ProfMonitor to get some more insight into the run-time behaviour of your Smalltalk code (see the sketch after this list).
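For the ProfMonitor idea, something along these lines should give a first picture of where the time goes (a sketch; handleBigJsonRequest is a placeholder for whatever produces and writes your response, and see the ProfMonitor class comments for the exact API):

    | report |
    report := ProfMonitor monitorBlock: [
        "placeholder: run the slow request handling here"
        self handleBigJsonRequest ].
    report  "a String; inspect it or write it to the gem log"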
Best regards,
Ralph

Am 20.05.2024 um 09:48 schrieb Otto Behrens via Glass:
> [...]
From Smalltalk at JGFoster.net Mon May 20 05:52:47 2024
From: Smalltalk at JGFoster.net (James Foster)
Date: Mon, 20 May 2024 05:52:47 -0700
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID: <66A1CEC3-EDAA-4557-8F6D-B155A82B41C2@JGFoster.net>

Otto,

May I recommend https://github.com/jgfoster/WebGS ? With this I was able to read a 64 MB extent from the file system, send it over an HTTP connection, and write it to the file system in 0.164 seconds total.

    time curl http://127.0.0.1:8888/extent0.dbf --output extent0.dbf
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 64.0M  100 64.0M    0     0   452M      0 --:--:-- --:--:-- --:--:--  453M
    curl http://127.0.0.1:8888/extent0.dbf --output extent0.dbf  0.01s user 0.08s system 53% cpu 0.164 total

A second attempt, this time writing to /dev/null, took 0.079 seconds total.

    time curl http://127.0.0.1:8888/extent0.dbf --output /dev/null
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100 64.0M  100 64.0M    0     0   987M      0 --:--:-- --:--:-- --:--:-- 1000M
    curl http://127.0.0.1:8888/extent0.dbf --output /dev/null  0.01s user 0.03s system 43% cpu 0.079 total

If the data were already in GemStone I'm sure it would take much less time. Let me know if you have questions!

James Foster

> On May 20, 2024, at 12:48 AM, Otto Behrens via Glass wrote:
> [...]
From johan at yesplan.be Mon May 20 07:02:39 2024
From: johan at yesplan.be (Johan Brichau)
Date: Mon, 20 May 2024 16:02:39 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID:

Hi Otto,

Do you have a code snippet that isolates this slow performance using only the Seaside framework, so that I can reproduce the problem?

I'm using FastCGI in production, and serving large JSON files as well. I did not see this performance issue pop up, though.

As you mentioned, in your API endpoints it's better to stream directly to the response stream rather than use the builder. Looking at the code, I see there is actually no example of a JSON REST API; I will add one while I'm trying to reproduce your issue. A first sketch follows.
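This is roughly what I have in mind, using the Seaside-REST pragmas on a WARestfulHandler subclass (the handler and accessor names are only an illustration):

    MyJsonApi >> listRecords
        <get>
        <path: '/records'>
        <produces: 'application/json'>
        ^ WAJsonCanvas builder render: [ :json |
            "illustration: render the domain objects as JSON"
            self model jsonOn: json ]

Returning the rendered String lets the handler set the body; for responses as large as yours, the streaming variant you describe would slot in here instead.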
Johan

> On 20 May 2024, at 09:48, Otto Behrens via Glass wrote:
> [...]
From otto at finworks.biz Mon May 20 07:18:35 2024
From: otto at finworks.biz (Otto Behrens)
Date: Mon, 20 May 2024 16:18:35 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID:

Ralph, thanks a lot for your response.

> 70MB is quite large, but should work fine if the relevant settings are adjusted (increased gem memory, allowed nginx response size). I agree that 26 seconds for 16MB is a very long time.

Yes, we ran out of temporary object space and increased that. We also changed the REST API code a bit to avoid unnecessary buffer copying. It is not streaming properly yet, but it survives with the memory that we have allocated for now.

> * Depending on your API, it might be a good idea to think about pagination and split the data over several API requests with smaller responses.

Ok, that may help. How big should a page be? Is 10MB too much? (See the sketch at the end of this mail.)

> * I would configure nginx to do the response compression (gzip) to the client.

We did that, but it made no significant difference. The problem is on the server side.

> * I would repeat your local curl test with a connection to your Zinc server port, just to take nginx out of the equation.

That is a great idea. Will do that. We are using client certificates, but for a test I can hard-code the fingerprint.
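For the record, the direct test would be something like this (the port and path here are made up; our real setup terminates the client certificates at nginx):

    time curl http://localhost:8383/api/report --output /dev/null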
> * If nginx is not causing the trouble, I would try to use GemStone's ProfMonitor to get some more insight into the run-time behaviour of your Smalltalk code.

The Smalltalk code is reasonably expensive, but my tests (using kill -USR1) revealed that it was mostly spending time in SocketStream >> nextPutAll:
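To come back to the pagination question above, this is roughly what I have in mind (all names hypothetical):

    MyApi >> renderPageFrom: offset limit: limit on: response
        "Answer one page of at most limit records; a client walks the
         collection with ?offset=0&limit=1000, ?offset=1000&limit=1000, ..."
        | all page |
        all := self records.
        page := all copyFrom: offset + 1 to: (offset + limit min: all size).
        WAJsonCanvas builder
            render: [ :json | page jsonOn: json ]
            onStream: response stream

The page size then becomes a tuning knob rather than one fixed 70MB response.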
Otto Behrens
+27 82 809 2375
www.finworks.biz

From otto at finworks.biz Mon May 20 07:21:33 2024
From: otto at finworks.biz (Otto Behrens)
Date: Mon, 20 May 2024 16:21:33 +0200
Subject: [Glass] rest api returning large result
In-Reply-To: <66A1CEC3-EDAA-4557-8F6D-B155A82B41C2@JGFoster.net>
References: <66A1CEC3-EDAA-4557-8F6D-B155A82B41C2@JGFoster.net>
Message-ID:

Thanks for your reply, James.

> May I recommend https://github.com/jgfoster/WebGS ? With this I was able to read a 64 MB extent from the file system, send it over an HTTP connection, and write it to the file system in 0.164 seconds total.

Does WebGS come with its own HTTP(S) server? And are you reverse proxying with another server in front?

> If the data were already in GemStone I'm sure it would take much less time. Let me know if you have questions!

I bet the WebGS framework is more optimal than Seaside; do you have a REST API framework as well?

> James Foster
> [...]
From Smalltalk at JGFoster.net Mon May 20 07:27:04 2024
From: Smalltalk at JGFoster.net (James Foster)
Date: Mon, 20 May 2024 07:27:04 -0700
Subject: [Glass] rest api returning large result
In-Reply-To:
References: <66A1CEC3-EDAA-4557-8F6D-B155A82B41C2@JGFoster.net>
Message-ID:

See below...

> On May 20, 2024, at 7:21 AM, Otto Behrens wrote:
>
> Does WebGS come with its own HTTP(S) server? And are you reverse proxying with another server in front?

Yes, it has an HTTP(S) server built in and can be proxied if you wish.

> I bet the WebGS framework is more optimal than Seaside; do you have a REST API framework as well?

Yes, that is its primary function; see the sample page and code.

James

From otto at finworks.biz Mon May 20 07:48:04 2024
From: otto at finworks.biz (Otto Behrens)
Date: Mon, 20 May 2024 16:48:04 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID:

Thanks, Johan.

We basically hacked a responder to do something like:

    requestContext respond: [ :response |
        response
            status: 200;
            contentType: 'application/json'.
        response stream nextPutAll: fileContents ]

where we read the fileContents from a file.

> I'm using FastCGI in production, and serving large JSON files as well. I did not see this performance issue pop up, though.

Oh dear, we did use FastCGI many moons ago and ended up reverting to an HTTP proxy. It was a bit easier to work with, as HTTP is more readable, but I just remember it was a bit of a battle.

> As you mentioned, in your API endpoints it's better to stream directly to the response stream rather than use the builder. Looking at the code, I see there is actually no example of a JSON REST API; I will add one while I'm trying to reproduce your issue.

Yes, we use WABuilder >> render:. We are avoiding the "String streamContents:" bit by passing the stream directly:

    WAJsonCanvas builder
        render: [ :json | self buildContent jsonOn: json ]
        onStream: aWAResponse stream

where #render:onStream: is the code of #render:, adapted to write onto the given stream.
>> > _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass

From johan at yesplan.be Mon May 20 08:00:21 2024
From: johan at yesplan.be (Johan Brichau)
Date: Mon, 20 May 2024 17:00:21 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be>

> On 20 May 2024, at 16:48, Otto Behrens wrote:
>
> Thanks, Johan.
>
> We basically hacked a responder to do something like:
>
> requestContext
>     respond: [ :response |
>         response
>             status: 200;
>             contentType: 'application/json'.
>         response stream nextPutAll: fileContents ]

Okay, I'll give that a try and see what I can get when using the Zinc adaptor in GemStone...

Btw, if you have the file on disk, consider using the X-Sendfile protocol to nginx. Something like this, where 'document url' is the url where it is reachable through nginx:

self requestContext
    respond: [ :response |
        response headerAt: 'X-Accel-Redirect' put: document url ]

>> I'm using FastCGI in production, and serving large json files as well. Did not see this performance issue pop up though.
>
> O dear, we did use FastCGI many moons ago and ended up reverting to an HTTP proxy. It was a bit easier to work with as HTTP is more readable, but I just remember it was a bit of a battle.

I would not recommend it anymore in the sense that the protocol itself is outdated and prohibits things like websockets. But I mentioned it to say that the performance issue might very well be in the Zinc Adaptor for GemStone.
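For the X-Accel-Redirect approach above to work, nginx has to map the redirect target onto an internal location pointing at the directory the file was written to. A minimal sketch of the nginx side (the location name and path are illustrative assumptions, not taken from this thread):

    # nginx.conf: this location is only reachable via an X-Accel-Redirect header
    location /protected/ {
        internal;
        alias /var/www/protected/;
    }

A response carrying 'X-Accel-Redirect: /protected/result.json' then makes nginx stream /var/www/protected/result.json to the client itself, so the large body never travels through the Smalltalk socket code.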
From richard.sargent at gemtalksystems.com Mon May 20 09:45:43 2024
From: richard.sargent at gemtalksystems.com (Richard Sargent)
Date: Mon, 20 May 2024 09:45:43 -0700
Subject: [Glass] rest api returning large result
In-Reply-To:
References:
Message-ID:

See below.

On Mon, May 20, 2024 at 7:18 AM Otto Behrens via Glass < glass at lists.gemtalksystems.com> wrote:

> Ralph, thanks a lot for your response.
>
>> 70MB is quite large, but should work fine if the relevant settings are
>> adjusted (increased GEM memory, allowed nginx response size). I agree that
>> 26 seconds for 16MB is a very long time.
>
> Yes, we ran out of temporary object space and increased that. We also
> changed the Rest API code a bit to avoid unnecessary buffer copying. It is
> not streaming properly yet, but survives with the memory that we have
> allocated for now.
>
>> On an abstract level, these are my thoughts:
>>
>> * Depending on your API it might be a good idea to think about
>> pagination and split the data into some more api requests with smaller
>> responses.
>
> Ok, that may help. How big should a page be? Is 10MB too much?
>
>> * I would configure nginx to do the response compression (gzip) to the
>> client.
>
> We did that, but it made no significant difference. The problem is on the
> server side.
>
>> * I would repeat your local curl test with a connect to your Zinc-server
>> port just to take nginx out of the equation.
>
> That is a great idea. Will do that. We are using client certificates, but
> for a test I can hard code the fingerprint.
>
>> * If nginx is not causing the trouble, I would try to use GemStone's
>> ProfMonitor to get some more insights about the run time behaviour of your
>> smalltalk code.
>
> The smalltalk code is reasonably expensive, but my tests (using kill
> -USR1) revealed that it was mostly spending time in SocketStream >>
> nextPutAll:

I know there were changes in recent versions of GemStone to provide better stream support for the Seaside use cases. I don't recall the details, but there is at least an AppendStream optimized for string building with primitives for some methods, I think. I just checked: "recent" is actually version 3.4!

Do you see that class in the profile stacks or some other Stream class?

>> Best regards,
>> Ralph
>>
>> Am 20.05.2024 um 09:48 schrieb Otto Behrens via Glass:
>>
>> We have not managed to fix this yet. What is your opinion on the
>> following ideas?
>>
>> 1. Otto, you are an idiot. Why would you be sending a 70MB json response
>> out on a REST API? This is not how you do an API. [Otto: that may well be
>> true. How should I then get the data across to the user? Is there someone
>> that can help me with a solution?]
>> 2. Otto, you have not kept up to date with things and you are the only
>> one in the whole world that's using WAGsZincAdaptor serving as an nginx
>> upstream. WTF. [Otto: Yes, we are here on the bottom tip of Africa where
>> the internet is slow and we read even slower, sorry about that. Please help
>> me with some sites, documents and any other material so that I can start
>> reading.]
>> 3. Otto, have you heard of the idea of compression? You should know that
>> JSON will compress to at least a 10th of the original size because it is
>> text with a lot of repetition. [Otto: yes, I downloaded a zip file once and
>> could not read it in vim. Is this what I should do: compress the connection
>> between nginx and the Zinc adaptor? Or should I send the json back as a
>> compressed zip file?]
>> 4. Otto, you should get to know nginx and its settings and understand all
>> the stuff nginx spits out when debug logging is on. Better still, download
>> the C source code; you should still be able to after only Smalltalking for
>> 20 years. [Otto: Are you super human? Have you seen all of
>> that? Please enlighten me as this will take me years.]
>>
>> Of course I missed some ideas. Please feel free to add them to the list.
>>
>> Otto Behrens
>> +27 82 809 2375
>> [image: FINWorks]
>> [image: FINWorks]
>> www.finworks.biz
>>
>> Disclaimer & Confidentiality Note: This email is intended solely for the
>> use of the individual or entity named above as it may contain information
>> that is confidential and privileged. If you are not the intended recipient,
>> be advised that any dissemination, distribution or copying of this email is
>> strictly prohibited. FINWorks cannot be held liable by any person other
>> than the addressee in respect of any opinions, conclusions, advice or other
>> information contained in this email.
>>
>> On Fri, May 17, 2024 at 7:09 AM Otto Behrens wrote:
>>
>>> Hi,
>>>
>>> We are running into a performance problem where our API returns about
>>> 70MB json content. We run a nginx web server which connects to
>>> a WAGsZincAdaptor that we start in a topaz session. Do you perhaps have the
>>> same kind of setup and can you please give me some advice on this?
>>> >>> We found that converting objects to json (using Object >> asJson from >>> Seaside-JSON-Core) was not performing great, but was eating loads of memory >>> because of WABuilder >> render:. This is not the issue and we improved this >>> a bit (by eliminating String streamContents: and streaming more directly). >>> >>> The problem seems to be that after producing the json content, >>> transmitting the response takes a long time. >>> >>> As an experiment, I read a 16MB file from disk and returned that as the >>> result of an API call to eliminate all json producing code. I used curl as >>> a client on the same machine as the nginx server, stone and the topaz >>> session and it takes 26 seconds. This eliminates most overhead (no network >>> latency). >>> >>> The stack below is what I see most of the time: >>> >>> 1 SocketStream >> nextPutAll: @natCode+0x4d [GsNMethod 169113089] >>> FP: 0x7f2c0fee9930=StackLimit[-218] , callerFP: >>> StackLimit[-212] >>> arg 1:0x7f2bee7f0de0 (cls:103425 ByteArray size:16384) >>> rcvr: 0x7f2bff68f670 (cls:144280577 SocketStream size:12) >>> 2 ZnBivalentWriteStream >> next:putAll:startingAt: @natCode+0x2cf >>> [GsNMethod 158727169] >>> FP: 0x7f2c0fee9960=StackLimit[-212] , callerFP: >>> StackLimit[-202] >>> arg 3:69337098 (SmallInteger 8667137) >>> arg 2:0x7f2c0064fe50 (cls:74753 String size:16627226)'(large_or_fwd >>> (size 16627226))' >>> arg 1:131074 (SmallInteger 16384) >>> rcvr: 0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>> 3 ZnUtils class >> nextPutAll:on: @natCode+0x421 [GsNMethod 175369473] >>> FP: 0x7f2c0fee99b0=StackLimit[-202] , callerFP: >>> StackLimit[-196] >>> arg 2:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>> arg 1:0x7f2c0064fe50 (cls:74753 String size:16627226)'(large_or_fwd >>> (size 16627226))' >>> rcvr: 0x7f2c0c335750 oid:143053313 (cls:143054593 ZnUtils class >>> size:19) >>> 4 ZnByteArrayEntity >> writeOn: @natCode+0xdb [GsNMethod 269993473] >>> FP: 0x7f2c0fee99e0=StackLimit[-196] , callerFP: >>> StackLimit[-186] >>> arg 1:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>> rcvr: 0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3) >>> 5 ZnEntityWriter >> writeEntity: @natCode+0x382 [GsNMethod 269988609] >>> FP: 0x7f2c0fee9a30=StackLimit[-186] , callerFP: >>> StackLimit[-180] >>> arg 1:0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3) >>> rcvr: 0x7f2c00675398 (cls:145876737 ZnEntityWriter size:2) >>> 6 ZnMessage >> writeOn: @natCode+0x295 [GsNMethod 158696193] >>> FP: 0x7f2c0fee9a60=StackLimit[-180] , callerFP: >>> StackLimit[-174] >>> arg 1:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>> rcvr: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3) >>> 7 ZnResponse >> writeOn: @natCode+0x1f0 [GsNMethod 155024025857] >>> FP: 0x7f2c0fee9a90=StackLimit[-174] , callerFP: >>> StackLimit[-169] >>> arg 1:0x7f2bff68f670 (cls:144280577 SocketStream size:12) >>> rcvr: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3) >>> 8 ZnSingleThreadedServer >> writeResponse:on: @natCode+0xa3 >>> [GsNMethod 169204737] >>> FP: 0x7f2c0fee9ab8=StackLimit[-169] , callerFP: >>> StackLimit[-162] >>> arg 2:0x7f2bff68f670 (cls:144280577 SocketStream size:12) >>> arg 1:0x7f2c0064fe20 (cls:145901313 ZnResponse size:3) >>> rcvr: 0x7f2bff5de528 oid:4763064833 (cls:144532225 >>> ZnManagingMultiThreadedServer size:9) >>> >>> Kind regards >>> >>> Otto Behrens >>> >>> +27 82 809 2375 >>> [image: FINWorks] >>> >>> >>> >>> [image: FINWorks] >>> www.finworks.biz >>> >>> Disclaimer & Confidentiality Note: This 
email is intended solely for the >>> use of the individual or entity named above as it may contain information >>> that is confidential and privileged. If you are not the intended recipient, >>> be advised that any dissemination, distribution or copying of this email is >>> strictly prohibited. FINWorks cannot be held liable by any person other >>> than the addressee in respect of any opinions, conclusions, advice or other >>> information contained in this email. >>> >> >> _______________________________________________ >> Glass mailing listGlass at lists.gemtalksystems.comhttps://lists.gemtalksystems.com/mailman/listinfo/glass >> >> _______________________________________________ >> Glass mailing list >> Glass at lists.gemtalksystems.com >> https://lists.gemtalksystems.com/mailman/listinfo/glass >> > _______________________________________________ > Glass mailing list > Glass at lists.gemtalksystems.com > https://lists.gemtalksystems.com/mailman/listinfo/glass > -------------- next part -------------- An HTML attachment was scrubbed... URL: From otto at finworks.biz Mon May 20 10:04:41 2024 From: otto at finworks.biz (Otto Behrens) Date: Mon, 20 May 2024 19:04:41 +0200 Subject: [Glass] rest api returning large result In-Reply-To: References: Message-ID: Thanks Richard, response below. > The smalltalk code is reasonably expensive, but my tests (using kill >> -USR1) revealed that it was mostly spending time in SocketStream >> >> nextPutAll: >> > > I know there were changes in recent versions of GemStone to provide better > stream support for the Seaside use cases. I don't recall the details, but > there is at least an AppendStream optimized for string building with > primitives for some methods, I think. I just checked: "recent" is actually > version 3.4! > > Do you see that class in the profile stacks or some other Stream class? > We are running 3.6.5. I don't see streaming on the profile stacks. Mostly Zinc writing to a socket. > > >>> Best regards, >>> Ralph >>> >>> >>> Am 20.05.2024 um 09:48 schrieb Otto Behrens via Glass: >>> >>> We have not managed to fix this yet. What is your opinion on the >>> following ideas? >>> >>> 1. Otto, you are an idiot. Why would you be sending a 70MB json response >>> out on a REST API? This is not how you do an API. [Otto: that may well be >>> true. How should I then get the data across to the user? Is there someone >>> that can help me with a solution?] >>> 2. Otto, you have not kept up to date with things and you are the only >>> one in the whole world that's using WAGsZincAdaptor serving as an nginx >>> upstream. WTF. [Otto: Yes, we are here on the bottom tip of Africa where >>> the internet is slow and we read even slower, sorry about that. Please help >>> me with some sites, documents and any other material so that I can start >>> reading.] >>> 3. Otto, have you heard of the idea of compression? You should know that >>> JSON will compress to at least a 10th of the original size because it is >>> text with a lot of repetition. [Otto: yes, I downloaded a zip file once and >>> could not read it in vim. Is this what I should do: compress the connection >>> between nginx and the Zinc adaptor? Or should I send the json back as a >>> compressed zip file?] >>> 4. Otto, you should get to know nginx and its settings and understand >>> all the stuff nginx spits out when debug logging is on. Better still, >>> download the C source code; you should still be able to after only >>> Smalltalking for 20 years. [Otto: Are you super human? 
Have you seen all of >>> that? Please enlighten me as this will take me years.] >>> >>> Of course I missed some ideas. Please feel free to add them to the list. >>> >>> Otto Behrens >>> >>> +27 82 809 2375 >>> [image: FINWorks] >>> >>> >>> >>> [image: FINWorks] >>> www.finworks.biz >>> >>> Disclaimer & Confidentiality Note: This email is intended solely for the >>> use of the individual or entity named above as it may contain information >>> that is confidential and privileged. If you are not the intended recipient, >>> be advised that any dissemination, distribution or copying of this email is >>> strictly prohibited. FINWorks cannot be held liable by any person other >>> than the addressee in respect of any opinions, conclusions, advice or other >>> information contained in this email. >>> >>> >>> On Fri, May 17, 2024 at 7:09?AM Otto Behrens wrote: >>> >>>> Hi, >>>> >>>> We are running into a performance problem where our API returns about >>>> 70MB json content. We run a nginx web server which connects to >>>> a WAGsZincAdaptor that we start in a topaz session. Do you perhaps have the >>>> same kind of setup and can you please give me some advice on this? >>>> >>>> We found that converting objects to json (using Object >> asJson from >>>> Seaside-JSON-Core) was not performing great, but was eating loads of memory >>>> because of WABuilder >> render:. This is not the issue and we improved this >>>> a bit (by eliminating String streamContents: and streaming more directly). >>>> >>>> The problem seems to be that after producing the json content, >>>> transmitting the response takes a long time. >>>> >>>> As an experiment, I read a 16MB file from disk and returned that as the >>>> result of an API call to eliminate all json producing code. I used curl as >>>> a client on the same machine as the nginx server, stone and the topaz >>>> session and it takes 26 seconds. This eliminates most overhead (no network >>>> latency). 
>>>> >>>> The stack below is what I see most of the time: >>>> >>>> 1 SocketStream >> nextPutAll: @natCode+0x4d [GsNMethod 169113089] >>>> FP: 0x7f2c0fee9930=StackLimit[-218] , callerFP: >>>> StackLimit[-212] >>>> arg 1:0x7f2bee7f0de0 (cls:103425 ByteArray size:16384) >>>> rcvr: 0x7f2bff68f670 (cls:144280577 SocketStream size:12) >>>> 2 ZnBivalentWriteStream >> next:putAll:startingAt: @natCode+0x2cf >>>> [GsNMethod 158727169] >>>> FP: 0x7f2c0fee9960=StackLimit[-212] , callerFP: >>>> StackLimit[-202] >>>> arg 3:69337098 (SmallInteger 8667137) >>>> arg 2:0x7f2c0064fe50 (cls:74753 String size:16627226)'(large_or_fwd >>>> (size 16627226))' >>>> arg 1:131074 (SmallInteger 16384) >>>> rcvr: 0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>>> 3 ZnUtils class >> nextPutAll:on: @natCode+0x421 [GsNMethod >>>> 175369473] >>>> FP: 0x7f2c0fee99b0=StackLimit[-202] , callerFP: >>>> StackLimit[-196] >>>> arg 2:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>>> arg 1:0x7f2c0064fe50 (cls:74753 String size:16627226)'(large_or_fwd >>>> (size 16627226))' >>>> rcvr: 0x7f2c0c335750 oid:143053313 (cls:143054593 ZnUtils class >>>> size:19) >>>> 4 ZnByteArrayEntity >> writeOn: @natCode+0xdb [GsNMethod 269993473] >>>> FP: 0x7f2c0fee99e0=StackLimit[-196] , callerFP: >>>> StackLimit[-186] >>>> arg 1:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>>> rcvr: 0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3) >>>> 5 ZnEntityWriter >> writeEntity: @natCode+0x382 [GsNMethod 269988609] >>>> FP: 0x7f2c0fee9a30=StackLimit[-186] , callerFP: >>>> StackLimit[-180] >>>> arg 1:0x7f2c00651f00 (cls:145996545 ZnByteArrayEntity size:3) >>>> rcvr: 0x7f2c00675398 (cls:145876737 ZnEntityWriter size:2) >>>> 6 ZnMessage >> writeOn: @natCode+0x295 [GsNMethod 158696193] >>>> FP: 0x7f2c0fee9a60=StackLimit[-180] , callerFP: >>>> StackLimit[-174] >>>> arg 1:0x7f2c00675370 (cls:145086465 ZnBivalentWriteStream size:2) >>>> rcvr: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3) >>>> 7 ZnResponse >> writeOn: @natCode+0x1f0 [GsNMethod 155024025857] >>>> FP: 0x7f2c0fee9a90=StackLimit[-174] , callerFP: >>>> StackLimit[-169] >>>> arg 1:0x7f2bff68f670 (cls:144280577 SocketStream size:12) >>>> rcvr: 0x7f2c0064fe20 (cls:145901313 ZnResponse size:3) >>>> 8 ZnSingleThreadedServer >> writeResponse:on: @natCode+0xa3 >>>> [GsNMethod 169204737] >>>> FP: 0x7f2c0fee9ab8=StackLimit[-169] , callerFP: >>>> StackLimit[-162] >>>> arg 2:0x7f2bff68f670 (cls:144280577 SocketStream size:12) >>>> arg 1:0x7f2c0064fe20 (cls:145901313 ZnResponse size:3) >>>> rcvr: 0x7f2bff5de528 oid:4763064833 (cls:144532225 >>>> ZnManagingMultiThreadedServer size:9) >>>> >>>> Kind regards >>>> >>>> Otto Behrens >>>> >>>> +27 82 809 2375 >>>> [image: FINWorks] >>>> >>>> >>>> >>>> [image: FINWorks] >>>> www.finworks.biz >>>> >>>> Disclaimer & Confidentiality Note: This email is intended solely for >>>> the use of the individual or entity named above as it may contain >>>> information that is confidential and privileged. If you are not the >>>> intended recipient, be advised that any dissemination, distribution or >>>> copying of this email is strictly prohibited. FINWorks cannot be held >>>> liable by any person other than the addressee in respect of any opinions, >>>> conclusions, advice or other information contained in this email. 
>>>> >>> _______________________________________________
>>> Glass mailing list
>>> Glass at lists.gemtalksystems.com
>>> https://lists.gemtalksystems.com/mailman/listinfo/glass
>>>
>>> _______________________________________________
>>> Glass mailing list
>>> Glass at lists.gemtalksystems.com
>>> https://lists.gemtalksystems.com/mailman/listinfo/glass
>>
>> _______________________________________________
>> Glass mailing list
>> Glass at lists.gemtalksystems.com
>> https://lists.gemtalksystems.com/mailman/listinfo/glass
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass

From otto at finworks.biz Mon May 20 10:08:47 2024
From: otto at finworks.biz (Otto Behrens)
Date: Mon, 20 May 2024 19:08:47 +0200
Subject: [Glass] rest api returning large result
In-Reply-To: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be>
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be>
Message-ID:

>> Okay, I'll give that a try and see what I can get when using the Zinc
>> adaptor in GemStone...

Thanks

> Btw, if you have the file on disk, consider using the X-Sendfile protocol to
> nginx.
> Something like this, where 'document url' is the url where it is reachable
> through nginx:

Yes, we use this extensively. We serve pdf documents and things like that. The rest API data can possibly also be written to a file. But not for all the api endpoints, maybe for this specific one that's giving us trouble. But that surely just bypasses the issue?

> self requestContext
>     respond: [ :response |
>         response headerAt: 'X-Accel-Redirect' put: document url ]

>> I'm using FastCGI in production, and serving large json files
>> as well. Did not see this performance issue pop up though.
>
> O dear, we did use FastCGI many moons ago and ended up reverting to an
> HTTP proxy. It was a bit easier to work with as HTTP is more readable, but
> I just remember it was a bit of a battle.

> I would not recommend it anymore in the sense that the protocol itself is
> outdated and prohibits things like websockets.
> But I mentioned it to say that the performance issue might very well be in
> the Zinc Adaptor for GemStone.
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass

From pdebruic at gmail.com Mon May 20 16:46:55 2024
From: pdebruic at gmail.com (Paul DeBruicker)
Date: Mon, 20 May 2024 23:46:55 +0000
Subject: [Glass] rest api returning large result
In-Reply-To:
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be>
Message-ID:

Have you tried the WAGsZincStreamingServerAdaptor or WAStreamedResponse classes?

In the WAGsZincStreamingServerAdaptor you'd need to override the #responseTo: method to use the WAStreamedResponse explicitly to get rid of all adaptor buffering. *I think*

Not sure that it would help but they're there and when I was messing with this https://github.com/SeasideSt/Seaside/pull/1319 I think they were working OK.
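The override Paul describes would look roughly like this. A sketch under assumptions: that #responseTo: is the hook where the adaptor creates the response object it then fills, and that WAStreamedResponse, like the buffered response, is created on the stream being written to; #networkStreamFor: is a hypothetical placeholder for however the adaptor reaches its write stream, so check the Seaside-GemStone source before copying any of this:

    responseTo: aRequest
        "In a subclass of WAGsZincStreamingServerAdaptor: always answer a
        streamed response, so the body is written to the socket as it is
        produced instead of being buffered whole in the adaptor.
        (networkStreamFor: is hypothetical, not a real Seaside selector.)"
        ^ WAStreamedResponse on: (self networkStreamFor: aRequest)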
On 5/20/24 17:08, Otto Behrens via Glass wrote:
> Okay, I'll give that a try and see what I can get when using the
> Zinc adaptor in GemStone...
>
> Thanks
>
> Btw, if you have the file on disk, consider using the X-Sendfile
> protocol to nginx.
> Something like this, where 'document url' is the url where it is
> reachable through nginx:
>
> Yes, we use this extensively. We serve pdf documents and things like
> that. The rest API data can possibly also be written to a file. But not
> for all the api endpoints, maybe for this specific one that's giving us
> trouble. But that surely just bypasses the issue?
>
> self requestContext
>     respond: [ :response |
>         response headerAt: 'X-Accel-Redirect' put: document url ]
>
>> I'm using FastCGI in production, and serving large json files
>> as well. Did not see this performance issue pop up though.
>>
>> O dear, we did use FastCGI many moons ago and ended up reverting
>> to an HTTP proxy. It was a bit easier to work with as HTTP is more
>> readable, but I just remember it was a bit of a battle.
>
> I would not recommend it anymore in the sense that the protocol
> itself is outdated and prohibits things like websockets.
> But I mentioned it to say that the performance issue might very well
> be in the Zinc Adaptor for GemStone.
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass

From pdebruic at gmail.com Mon May 20 17:50:56 2024
From: pdebruic at gmail.com (Paul DeBruicker)
Date: Tue, 21 May 2024 00:50:56 +0000
Subject: [Glass] rest api returning large result
In-Reply-To:
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be>
Message-ID: <74598fbd-5ae5-481d-8be3-aa45267e7abe@gmail.com>

Oh also - I wonder if you're hitting the request timeout and retry logic that the Seaside/GemStone handlers have. A retry would take no time in a profiler but would cause the response generation to happen up to 10 times.

On 5/20/24 23:46, Paul DeBruicker wrote:
> Have you tried the WAGsZincStreamingServerAdaptor or WAStreamedResponse
> classes?
>
> In the WAGsZincStreamingServerAdaptor you'd need to override the
> #responseTo: method to use the WAStreamedResponse explicitly to get rid
> of all adaptor buffering. *I think*
>
> Not sure that it would help but they're there and when I was messing
> with this https://github.com/SeasideSt/Seaside/pull/1319 I think they
> were working OK.
>
> On 5/20/24 17:08, Otto Behrens via Glass wrote:
>> Okay, I'll give that a try and see what I can get when using the
>> Zinc adaptor in GemStone...
>>
>> Thanks
>>
>> Btw, if you have the file on disk, consider using X-Sendfile
>> protocol to nginx.
>> Something like this, where 'document url' is the url where it is
>> reachable through nginx:
>>
>> Yes, we use this extensively. We serve pdf documents and things like
>> that. The rest API data can possibly also be written to a file. But
>> not for all the api endpoints, maybe for this specific one that's
>> giving us trouble. But that surely just bypasses the issue?
>>
>> self requestContext
>>     respond: [ :response |
>>         response headerAt: 'X-Accel-Redirect' put: document url ]
>>
>>> I'm using FastCGI in production, and serving large json files
>>> as well. Did not see this performance issue pop up though.
>>>
>>> O dear, we did use FastCGI many moons ago and ended up reverting
>>> to an HTTP proxy. It was a bit easier to work with as HTTP is more
>>> readable, but I just remember it was a bit of a battle.
>>
>> I would not recommend it anymore in the sense that the protocol
>> itself is outdated and prohibits things like websockets.
>> But I mentioned it to say that the performance issue might very well
>> be in the Zinc Adaptor for GemStone.
>> _______________________________________________
>> Glass mailing list
>> Glass at lists.gemtalksystems.com
>> https://lists.gemtalksystems.com/mailman/listinfo/glass
>>
>> _______________________________________________
>> Glass mailing list
>> Glass at lists.gemtalksystems.com
>> https://lists.gemtalksystems.com/mailman/listinfo/glass

From pdebruic at gmail.com Tue May 21 16:17:55 2024
From: pdebruic at gmail.com (Paul DeBruicker)
Date: Tue, 21 May 2024 23:17:55 +0000
Subject: [Glass] autocomplete functionality with Gemstone 3.6 indexes
Message-ID: <7468e8c9-d996-4dda-9af6-7e77574c7f0c@gmail.com>

Hi -

I've got a collection of 1.7 million people and want to search them by name (first/last in any order), preferably similarly to the autocomplete functionality we're all accustomed to.

Is there a way to do that with the help of GemStone indexes from 3.6.4?

Thanks

Paul

From otto at finworks.biz Tue May 21 23:06:01 2024
From: otto at finworks.biz (Otto Behrens)
Date: Wed, 22 May 2024 08:06:01 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be>
Message-ID:

Thanks a lot for your help, Paul. I will have a look.

On Tue, 21 May 2024, 01:47 Paul DeBruicker via Glass, < glass at lists.gemtalksystems.com> wrote:

> Have you tried the WAGsZincStreamingServerAdaptor or WAStreamedResponse
> classes?
>
> In the WAGsZincStreamingServerAdaptor you'd need to override the
> #responseTo: method to use the WAStreamedResponse explicitly to get rid
> of all adaptor buffering. *I think*
>
> Not sure that it would help but they're there and when I was messing
> with this https://github.com/SeasideSt/Seaside/pull/1319 I think they
> were working OK.
>
> On 5/20/24 17:08, Otto Behrens via Glass wrote:
> > Okay, I'll give that a try and see what I can get when using the
> > Zinc adaptor in GemStone...
> >
> > Thanks
> >
> > Btw, if you have the file on disk, consider using X-Sendfile
> > protocol to nginx.
> > Something like this, where 'document url' is the url where it is
> > reachable through nginx:
> >
> > Yes, we use this extensively. We serve pdf documents and things like
> > that. The rest API data can possibly also be written to a file. But not
> > for all the api endpoints, maybe for this specific one that's giving us
> > trouble. But that surely just bypasses the issue?
> >
> > self requestContext
> >     respond: [ :response |
> >         response headerAt: 'X-Accel-Redirect' put: document url ]
> >
> >> I'm using FastCGI in production, and serving large json files
> >> as well. Did not see this performance issue pop up though.
> >>
> >> O dear, we did use FastCGI many moons ago and ended up reverting
> >> to an HTTP proxy. It was a bit easier to work with as HTTP is more
> >> readable, but I just remember it was a bit of a battle.
> >
> > I would not recommend it anymore in the sense that the protocol
> > itself is outdated and prohibits things like websockets.
> > _______________________________________________ > > Glass mailing list > > Glass at lists.gemtalksystems.com Glass at lists.gemtalksystems.com> > > https://lists.gemtalksystems.com/mailman/listinfo/glass > > > > > > > > _______________________________________________ > > Glass mailing list > > Glass at lists.gemtalksystems.com > > https://lists.gemtalksystems.com/mailman/listinfo/glass > _______________________________________________ > Glass mailing list > Glass at lists.gemtalksystems.com > https://lists.gemtalksystems.com/mailman/listinfo/glass > -------------- next part -------------- An HTML attachment was scrubbed... URL: From otto at finworks.biz Tue May 21 23:07:34 2024 From: otto at finworks.biz (Otto Behrens) Date: Wed, 22 May 2024 08:07:34 +0200 Subject: [Glass] rest api returning large result In-Reply-To: <74598fbd-5ae5-481d-8be3-aa45267e7abe@gmail.com> References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <74598fbd-5ae5-481d-8be3-aa45267e7abe@gmail.com> Message-ID: I usually see this in logs when we hit retries, not 100% sure, will look out for it, thanks. On Tue, 21 May 2024, 02:51 Paul DeBruicker via Glass, < glass at lists.gemtalksystems.com> wrote: > Oh also - I wonder if you're hitting the request timeout so retry logic > the Seaside/GemStone handlers have. It would take no time in a profiler > but cause the response generation to happen up to 10 times. > > > > > On 5/20/24 23:46, Paul DeBruicker wrote: > > Have you tried the WAGsZincStreamingServerAdaptor or WAStreamedResponse > > classes? > > > > In the WAGsZincStreamingServerAdaptor you'd need to override the > > #responseTo: method to use the WAStreamedResponse explicitly to get rid > > of all adaptor buffering. *I think* > > > > > > Not sure that it would help but they're there and when I was messing > > with this https://github.com/SeasideSt/Seaside/pull/1319 I think they > > were working OK. > > > > > > > > On 5/20/24 17:08, Otto Behrens via Glass wrote: > >> Okay, I?ll give that a try and see what I can get when using the > >> Zinc adaptor in GemStone? > >> > >> > >> Thanks > >> > >> Btw, if you have the file on disk, consider using X-Sendfile > >> protocol to nginx. > >> Something like this, where ?document url? is the url where it is > >> reachable through nginx: > >> > >> > >> Yes, we use this extensively. We serve pdf documents and things like > >> that. The rest API data can possibly also be written to a file. But > >> not for all the api endpoints, maybe for this specific one that's > >> giving us trouble. But that surely just bypasses the issue? > >> > >> self requestContext > >> respond: [ :response | > >> response headerAt: 'X-Accel-Redirect' put: document url ] > >> > >>> I?m using FastCGI in production, and serving large json files > >>> as well. Did not see this performance issue pop up though. > >>> > >>> > >>> O dear, we did use FastCGI many moons ago and ended up reverting > >>> to an HTTP proxy. It was a bit easier to work with as HTTP is more > >>> readable, but I just remember it was a bit of a battle. > >> > >> I would not recommend it anymore in the sense that the protocol > >> itself is outdated and prohibits things like websockets. > >> But I mentioned it to say that the performance issue might very well > >> be in the Zinc Adaptor for GemStone. 
> >> _______________________________________________ > >> Glass mailing list > >> Glass at lists.gemtalksystems.com > >> > >> https://lists.gemtalksystems.com/mailman/listinfo/glass > >> > >> > >> > >> _______________________________________________ > >> Glass mailing list > >> Glass at lists.gemtalksystems.com > >> https://lists.gemtalksystems.com/mailman/listinfo/glass > _______________________________________________ > Glass mailing list > Glass at lists.gemtalksystems.com > https://lists.gemtalksystems.com/mailman/listinfo/glass > -------------- next part -------------- An HTML attachment was scrubbed... URL: From richard.sargent at gemtalksystems.com Tue May 21 23:21:46 2024 From: richard.sargent at gemtalksystems.com (Richard Sargent) Date: Tue, 21 May 2024 23:21:46 -0700 Subject: [Glass] rest api returning large result In-Reply-To: References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <74598fbd-5ae5-481d-8be3-aa45267e7abe@gmail.com> Message-ID: One of our clients uses session stats to count things like retries. Might be a good idea to incorporate into the Seaside GemStone version. On Tue, May 21, 2024, 23:07 Otto Behrens via Glass < glass at lists.gemtalksystems.com> wrote: > I usually see this in logs when we hit retries, not 100% sure, will look > out for it, thanks. > > On Tue, 21 May 2024, 02:51 Paul DeBruicker via Glass, < > glass at lists.gemtalksystems.com> wrote: > >> Oh also - I wonder if you're hitting the request timeout so retry logic >> the Seaside/GemStone handlers have. It would take no time in a profiler >> but cause the response generation to happen up to 10 times. >> >> >> >> >> On 5/20/24 23:46, Paul DeBruicker wrote: >> > Have you tried the WAGsZincStreamingServerAdaptor or WAStreamedResponse >> > classes? >> > >> > In the WAGsZincStreamingServerAdaptor you'd need to override the >> > #responseTo: method to use the WAStreamedResponse explicitly to get rid >> > of all adaptor buffering. *I think* >> > >> > >> > Not sure that it would help but they're there and when I was messing >> > with this https://github.com/SeasideSt/Seaside/pull/1319 I think they >> > were working OK. >> > >> > >> > >> > On 5/20/24 17:08, Otto Behrens via Glass wrote: >> >> Okay, I?ll give that a try and see what I can get when using the >> >> Zinc adaptor in GemStone? >> >> >> >> >> >> Thanks >> >> >> >> Btw, if you have the file on disk, consider using X-Sendfile >> >> protocol to nginx. >> >> Something like this, where ?document url? is the url where it is >> >> reachable through nginx: >> >> >> >> >> >> Yes, we use this extensively. We serve pdf documents and things like >> >> that. The rest API data can possibly also be written to a file. But >> >> not for all the api endpoints, maybe for this specific one that's >> >> giving us trouble. But that surely just bypasses the issue? >> >> >> >> self requestContext >> >> respond: [ :response | >> >> response headerAt: 'X-Accel-Redirect' put: document url ] >> >> >> >>> I?m using FastCGI in production, and serving large json files >> >>> as well. Did not see this performance issue pop up though. >> >>> >> >>> >> >>> O dear, we did use FastCGI many moons ago and ended up reverting >> >>> to an HTTP proxy. It was a bit easier to work with as HTTP is more >> >>> readable, but I just remember it was a bit of a battle. >> >> >> >> I would not recommend it anymore in the sense that the protocol >> >> itself is outdated and prohibits things like websockets. 
>> >> But I mentioned it to say that the performance issue might very >> well >> >> be in the Zinc Adaptor for GemStone. >> >> _______________________________________________ >> >> Glass mailing list >> >> Glass at lists.gemtalksystems.com >> >> >> >> https://lists.gemtalksystems.com/mailman/listinfo/glass >> >> >> >> >> >> >> >> _______________________________________________ >> >> Glass mailing list >> >> Glass at lists.gemtalksystems.com >> >> https://lists.gemtalksystems.com/mailman/listinfo/glass >> _______________________________________________ >> Glass mailing list >> Glass at lists.gemtalksystems.com >> https://lists.gemtalksystems.com/mailman/listinfo/glass >> > _______________________________________________ > Glass mailing list > Glass at lists.gemtalksystems.com > https://lists.gemtalksystems.com/mailman/listinfo/glass > -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at uksmalltalk.org Wed May 22 03:29:57 2024 From: info at uksmalltalk.org (UK Smalltalk) Date: Wed, 22 May 2024 11:29:57 +0100 Subject: [Glass] UKSTUG Meeting - Javier Pimas -- Live Metacircular Runtimes: The case of Egg Smalltalk - 29 June 2024 Message-ID: Egg is a new Smalltalk dialect that was designed from scratch to incorporate some interesting features: A module system with namespaces that replaces the old-good Smalltalk global. Dynamic identifiers, which are bound lazily similarly to how methods are lazily bound. A multi-VM architecture, with different VM implementations written in C++, Pharo, JavaScript and Egg. The Egg-in-Egg VM is special in that the VM component is just another module of the system, creating what we have named Live Metacircular Runtimes (LMRs) [1]. The most interesting characteristic of LMRs in Smalltalk is that they can be developed using standard Smalltalk tools, which shorten feedback loops when doing VM development. During the talk I'll show a little bit about Egg and its LMR, and how not only VM developers get more productive when writing VMs, but also application developers can better understand what the VM does behind the scenes. [1] https://arxiv.org/abs/2312.16973 - Live Objects All The Way Down: Removing the Barriers between Applications and Virtual Machines Javier Pimas is a fan of high-level low-level programming. He has been successfully mixing Smalltalk and assembly code for more than a decade. Within Bee Smalltalk, SqueakNOS and now Egg Smalltalk projects he has been trying to make live-programming of system-level concerns more practical to application programmers. Javier works for Labware and Quorum Software, where he applies his expertise to real-world challenges while pursuing a PhD in Computer Science at Buenos Aires University. This will be an online meeting from home. If you'd like to join us, please sign up in advance on the meeting's Meetup page ( https://www.meetup.com/ukstug/events/300511322/ ) to receive the meeting details. Don?t forget to bring your laptop and drinks! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johan at yesplan.be Wed May 22 03:56:18 2024 From: johan at yesplan.be (Johan Brichau) Date: Wed, 22 May 2024 12:56:18 +0200 Subject: [Glass] rest api returning large result In-Reply-To: References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <74598fbd-5ae5-481d-8be3-aa45267e7abe@gmail.com> Message-ID: <2670A157-9EFE-4FD0-9F71-5950309ED294@yesplan.be> > On 22 May 2024, at 08:21, Richard Sargent via Glass wrote: > > One of our clients uses session stats to count things like retries. Might be a good idea to incorporate into the Seaside GemStone version. Good point and useful to all. Easy to add in https://github.com/SeasideSt/Seaside/blob/master/repository/Seaside-GemStone-Core.package/GRGemStonePlatform.extension/instance/seasideProcessRequestWithRetry.resultBlock..st Johan -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at uksmalltalk.org Wed May 22 08:01:45 2024 From: info at uksmalltalk.org (UK Smalltalk) Date: Wed, 22 May 2024 16:01:45 +0100 Subject: [Glass] DATE CORRECTION - UKSTUG Meeting - Javier Pimas -- Live Metacircular Runtimes: The case of Egg Smalltalk - 29 May 2024 Message-ID: Egg is a new Smalltalk dialect that was designed from scratch to incorporate some interesting features: A module system with namespaces that replaces the old-good Smalltalk global. Dynamic identifiers, which are bound lazily similarly to how methods are lazily bound. A multi-VM architecture, with different VM implementations written in C++, Pharo, JavaScript and Egg. The Egg-in-Egg VM is special in that the VM component is just another module of the system, creating what we have named Live Metacircular Runtimes (LMRs) [1]. The most interesting characteristic of LMRs in Smalltalk is that they can be developed using standard Smalltalk tools, which shorten feedback loops when doing VM development. During the talk I'll show a little bit about Egg and its LMR, and how not only VM developers get more productive when writing VMs, but also application developers can better understand what the VM does behind the scenes. [1] https://arxiv.org/abs/2312.16973 - Live Objects All The Way Down: Removing the Barriers between Applications and Virtual Machines Javier Pimas is a fan of high-level low-level programming. He has been successfully mixing Smalltalk and assembly code for more than a decade. Within Bee Smalltalk, SqueakNOS and now Egg Smalltalk projects he has been trying to make live-programming of system-level concerns more practical to application programmers. Javier works for Labware and Quorum Software, where he applies his expertise to real-world challenges while pursuing a PhD in Computer Science at Buenos Aires University. This will be an online meeting from home. If you'd like to join us, please sign up in advance on the meeting's Meetup page ( https://www.meetup.com/ukstug/events/300511322/ ) to receive the meeting details. Don?t forget to bring your laptop and drinks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From masashi.umezawa at gmail.com Fri May 24 05:39:07 2024 From: masashi.umezawa at gmail.com (Masashi UMEZAWA) Date: Fri, 24 May 2024 21:39:07 +0900 Subject: [Glass] autocomplete functionality with Gemstone 3.6 indexes In-Reply-To: <7468e8c9-d996-4dda-9af6-7e77574c7f0c@gmail.com> References: <7468e8c9-d996-4dda-9af6-7e77574c7f0c@gmail.com> Message-ID: Hello, I've once developed a Meilisearch client for GemStone/S. 
https://github.com/mumez/Meilisearch.st

Meilisearch supports prefix search, so it can be used for autocompletion. https://www.meilisearch.com/docs/learn/advanced/prefix The downside is that you need to add another index for the text search.

Best regards,

On Wed, 22 May 2024 at 8:35, Paul DeBruicker via Glass wrote:
>
> Hi -
>
> I've got a collection of 1.7 million people and want to search them by
> name (first/last in any order), preferably similarly to the autocomplete
> functionality we're all accustomed to.
>
> Is there a way to do that with the help of GemStone indexes from 3.6.4?
>
> Thanks
>
> Paul
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass

--
[:masashi | ^umezawa]

From ralph.mauersberger at gmx.net Fri May 24 10:14:38 2024
From: ralph.mauersberger at gmx.net (Ralph Mauersberger)
Date: Fri, 24 May 2024 19:14:38 +0200
Subject: [Glass] autocomplete functionality with Gemstone 3.6 indexes
In-Reply-To: <7468e8c9-d996-4dda-9af6-7e77574c7f0c@gmail.com>
References: <7468e8c9-d996-4dda-9af6-7e77574c7f0c@gmail.com>
Message-ID:

Hello Paul,

not sure about the real point of your question. GemStone's Programming Guide has a chapter about creating and using indexes. You can expect very fast queries with runtimes of just a few milliseconds or even less. So it should fulfill your performance requirements for type-ahead forms in WebApps very well.

You can't send messages like beginsWith: or includesString: in your index-query's select block; you are limited to string comparison operands. If all you need is a prefix search, you can go with the following approach:

Assuming you have a class Person with an instance variable name. Put all your Person objects in a collection like (Rc)IdentityBag and create an Equality Index on that instVar. To search your Persons by name prefix, build a (class side) method like this:

withNamePrefix: namePrefix
    | endString |
    endString := namePrefix, 'ZZZZZ'.
    ^ personCollection select: { :each | (each.name >= namePrefix) & (each.name < endString) }

This example uses GemStone's old query syntax. On 3.6 you could/should go with the new GsQuery API.

If you need more, e.g. case insensitive substring search, you could build a helper structure containing all substrings of the names. For example, build a class PersonNameSuffix with instVars substring and person. For the person with name "Abcd", build the PersonNameSuffix instances with substring "abcd", "bcd", "cd", "d" and let all these instances reference that person. Put these instances in a collection with index on instVar substring. Do that for all persons. You can then perform your search on that collection and collect all referenced persons from the result in an IdentitySet.

Hope that helps.

Ralph

Am 22.05.2024 um 01:17 schrieb Paul DeBruicker via Glass:
> Hi -
>
> I've got a collection of 1.7 million people and want to search them by
> name (first/last in any order), preferably similarly to the
> autocomplete functionality we're all accustomed to.
>
> Is there a way to do that with the help of GemStone indexes from 3.6.4?
>
> Thanks
>
> Paul
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass
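Populating Ralph's helper structure is mechanical. A sketch of the build step (PersonNameSuffix and its two instVars are as he describes; the setter selectors and the collection name are assumptions):

    addSuffixesFor: aPerson into: suffixCollection
        "Create one PersonNameSuffix per suffix of the lowercased name.
        Combined with the >= / < range trick above, this answers
        case-insensitive substring searches: a query for 'bc' matches
        because 'bc' is a prefix of the suffix 'bcd'."
        | lowered |
        lowered := aPerson name asLowercase.
        1 to: lowered size do: [ :start |
            suffixCollection add:
                (PersonNameSuffix new
                    substring: (lowered copyFrom: start to: lowered size);
                    person: aPerson;
                    yourself) ]

For 1.7 million people this multiplies the number of indexed entries by the average name length, so it trades storage for the fast indexed range lookup.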
From johan at yesplan.be Fri May 24 23:57:57 2024
From: johan at yesplan.be (Johan Brichau)
Date: Sat, 25 May 2024 08:57:57 +0200
Subject: [Glass] rest api returning large result
In-Reply-To: <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be>
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be>
Message-ID: <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be>

The problem seems to be in `ZnUtils>>nextPutAll:on:` that splits the collection into chunks to write it to the Socket.
When I remove that method and just pass through the writing of the collection as `stream nextPutAll: collection`, I go from 39s to 1s (see below, before and after).

Now: understand why that piece of code is there and if it's needed or not :-)

Johan

johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 55.2M  100 55.2M    0     0  1465k      0  0:00:38  0:00:38 --:--:-- 1476k
curl http://localhost:8383/test --output bla.txt  0.04s user 0.14s system 0% cpu 38.657 total

johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 55.2M  100 55.2M    0     0  45.0M      0  0:00:01  0:00:01 --:--:-- 45.0M
curl http://localhost:8383/test --output bla.txt  0.01s user 0.07s system 6% cpu 1.255 total

> On 23 May 2024, at 23:45, Johan Brichau wrote:
>
> Hi Otto,
>
>> On 20 May 2024, at 19:08, Otto Behrens wrote:
>>
>> Okay, I'll give that a try and see what I can get when using the Zinc adaptor in GemStone...
>
> It took me a bit longer to actually get started since I wanted to debug this on my Mac using a GsDevKit_stones setup and I had to dig into another rabbit hole first :-)
> Anyway, I set up the simple handler below and I can also confirm that a file of 60MB takes 40s to be served and 28MB takes 10s.
>
> In Pharo, the 60MB file takes 3s.
> I'll dive into this a bit more tomorrow...
>
> get
>
>     | file |
>     file := GsFile openReadOnServer: '/Users/johanbrichau/testfile'.
>     ^ file contents

From johan at yesplan.be Sat May 25 00:11:31 2024
From: johan at yesplan.be (Johan Brichau)
Date: Sat, 25 May 2024 09:11:31 +0200
Subject: [Glass] rest api returning large result
In-Reply-To: <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be>
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be> <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be>
Message-ID: <04034116-8473-4E7A-A1F6-8F448F2DA7B4@yesplan.be>

Other observations (before I head off doing other worldly things one has to do on Saturdays ;-):

- Changing `ZnUtils>>streamingBufferSize` from 16384 to 163840 (i.e. x10) also improves the performance (as expected) to 4s response time for the 55MB file
> When I remove that method and just pass through the writing of the collection as `stream nextPutAll: collection`, I go from 39s to 1s (see below, before and after) > > Now: understand why that piece of code is there and if it?s needed or not :-) > > Johan > > johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 100 55.2M 100 55.2M 0 0 1465k 0 0:00:38 0:00:38 --:--:-- 1476k > curl http://localhost:8383/test -output bla.txt 0.04s user 0.14s system 0% cpu 38.657 total > > > johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt > % Total % Received % Xferd Average Speed Time Time Time Current > Dload Upload Total Spent Left Speed > 100 55.2M 100 55.2M 0 0 45.0M 0 0:00:01 0:00:01 --:--:-- 45.0M > curl http://localhost:8383/test --output bla.txt 0.01s user 0.07s system 6% cpu 1.255 total > > >> On 23 May 2024, at 23:45, Johan Brichau wrote: >> >> Hi Otto, >> >>> On 20 May 2024, at 19:08, Otto Behrens wrote: >>> >>> Okay, I?ll give that a try and see what I can get when using the Zinc adaptor in GemStone? >> >> >> It took me a bit longer to actually get started since I wanted to debug this on my Mac using a GsDevKit_stones setup and I had to dig into another rabbit hole first :-) >> Anyway, I setup the simple handler below and I can confirm also notice that a file of 60MB takes 40s to be served and 28MB takes 10s. >> >> In Pharo, the 60MB file takes 3s. >> I?ll dive into this a bit more tomorrow?.. >> >> get >> >> >> >> >> >> | file | >> file := GsFile openReadOnServer: '/Users/johanbrichau/testfile'. >> ^ file contents >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From otto at finworks.biz Sat May 25 01:32:19 2024 From: otto at finworks.biz (Otto Behrens) Date: Sat, 25 May 2024 10:32:19 +0200 Subject: [Glass] rest api returning large result In-Reply-To: <04034116-8473-4E7A-A1F6-8F448F2DA7B4@yesplan.be> References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be> <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be> <04034116-8473-4E7A-A1F6-8F448F2DA7B4@yesplan.be> Message-ID: Johan, this is magic, thanks a lot. Wow On Sat, 25 May 2024, 09:11 Johan Brichau via Glass, < glass at lists.gemtalksystems.com> wrote: > Other observations (before I head off doing other worldly things one has > to do on Saturdays ;-): > > - Changing `ZnUtils>>streamingBufferSize` from 16384 to 163840 (i.e. x10) > also improves the performance (as expected) to 4s response time for the > 55MB file > - In Pharo, the same code is used and there it takes 3s for the same file, > same code (not quite the same Zinc version though, but the methods I > mentioned are still the same) > > Johan > > On 25 May 2024, at 08:57, Johan Brichau wrote: > > The problem seems to be in `ZnUtils>>nextPutAll:on:` that splits the > collection into chunks to write it to the Socket. 
> When I remove that method and just pass through the writing of the
> collection as `stream nextPutAll: collection`, I go from 39s to 1s (see
> below, before and after)
>
> Now: understand why that piece of code is there and if it's needed or not
> :-)
>
> Johan
>
> johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 55.2M  100 55.2M    0     0  1465k      0  0:00:38  0:00:38 --:--:-- 1476k
> curl http://localhost:8383/test --output bla.txt  0.04s user 0.14s system 0% cpu 38.657 total
>
> johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt
>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>                                  Dload  Upload   Total   Spent    Left  Speed
> 100 55.2M  100 55.2M    0     0  45.0M      0  0:00:01  0:00:01 --:--:-- 45.0M
> curl http://localhost:8383/test --output bla.txt  0.01s user 0.07s system 6% cpu 1.255 total
>
>> On 23 May 2024, at 23:45, Johan Brichau wrote:
>>
>> Hi Otto,
>>
>>> On 20 May 2024, at 19:08, Otto Behrens wrote:
>>>
>>> Okay, I'll give that a try and see what I can get when using the Zinc adaptor in GemStone...
>>
>> It took me a bit longer to actually get started since I wanted to debug this on my Mac using a GsDevKit_stones setup and I had to dig into another rabbit hole first :-)
>> Anyway, I set up the simple handler below and I can also confirm that a file of 60MB takes 40s to be served and 28MB takes 10s.
>>
>> In Pharo, the 60MB file takes 3s.
>> I'll dive into this a bit more tomorrow...
>>
>> get
>>
>>     | file |
>>     file := GsFile openReadOnServer: '/Users/johanbrichau/testfile'.
>>     ^ file contents
>
> _______________________________________________
> Glass mailing list
> Glass at lists.gemtalksystems.com
> https://lists.gemtalksystems.com/mailman/listinfo/glass

From johan at yesplan.be Sun May 26 04:21:21 2024
From: johan at yesplan.be (Johan Brichau)
Date: Sun, 26 May 2024 13:21:21 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be> <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be> <04034116-8473-4E7A-A1F6-8F448F2DA7B4@yesplan.be>
Message-ID:

In the meantime, I found the real cause in ZnBivalentWriteStream>>#next:putAll:startingAt: in a change that was made 11 years ago in commit [1] (by myself... :roll-eyes:)

The incoming collection argument is converted to a ByteArray on each call of this iteration.
The trouble is that this is the entire collection and not just the part that is to be written in the iteration.
That means that for every 16KB of data, the entire collection of 55MB (in my testcase) is converted to a ByteArray...

I'm working on a fix for this in https://github.com/GsDevKit/zinc/pull/105
It looks like simply removing the conversion and aligning the code of ZnBivalentWriteStream>>#next:putAll:startingAt: with its original in Pharo fixes the problem altogether.

I'm not sure if the change was done inadvertently or if it did fix issues. I did some manual tests with Seaside in a GS 3.6.8 and it looked fine.

In any case: fixing this will yield significant performance improvements for anyone running Zinc in GemStone... not just for very large responses where the problem is very apparent.

[1] https://github.com/GsDevKit/zinc/commit/cbddf8b52589ad2107feba8557e27db5cad5acbd#diff-b940261c40f3e320ede03be3b7ac8c7fe5ff81ec310656aa6b557b069cce1dc2
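Reduced to a sketch, this is the shape of the problem Johan describes (illustrative code, not the actual Zinc source): the chunking loop in ZnUtils class>>#nextPutAll:on: hands 16KB slices to the bivalent stream, but the GemStone version of #next:putAll:startingAt: first converted the whole collection on every call.

    next: count putAll: aCollection startingAt: offset
        "Old GemStone shape: aCollection asByteArray copies the entire
        collection on every call. A 55MB response written in 16KB chunks
        pays for about 3,500 full 55MB conversions, hence quadratic time."
        stream next: count putAll: aCollection asByteArray startingAt: offset

    next: count putAll: aCollection startingAt: offset
        "Pharo-aligned shape: pass the collection through untouched and
        let the underlying stream write only the requested slice."
        stream next: count putAll: aCollection startingAt: offset

The 39s vs 1s measurements earlier in the thread are consistent with removing that per-chunk copy.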
> On 25 May 2024, at 10:32, Otto Behrens wrote:
>
> Johan, this is magic, thanks a lot. Wow
>
> On Sat, 25 May 2024, 09:11 Johan Brichau via Glass, wrote:
>> Other observations (before I head off doing other worldly things one has to do on Saturdays ;-):
>>
>> - Changing `ZnUtils>>streamingBufferSize` from 16384 to 163840 (i.e. x10) also improves the performance (as expected) to 4s response time for the 55MB file
>> - In Pharo, the same code is used and there it takes 3s for the same file (not quite the same Zinc version though, but the methods I mentioned are still the same)
>>
>> Johan
>>
>>> On 25 May 2024, at 08:57, Johan Brichau wrote:
>>>
>>> The problem seems to be in `ZnUtils>>nextPutAll:on:` that splits the collection into chunks to write it to the Socket.
>>> When I remove that method and just pass through the writing of the collection as `stream nextPutAll: collection`, I go from 39s to 1s (see below, before and after)
>>>
>>> Now: understand why that piece of code is there and if it's needed or not :-)
>>>
>>> Johan
>>>
>>> johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt
>>>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>>>                                  Dload  Upload   Total   Spent    Left  Speed
>>> 100 55.2M  100 55.2M    0     0  1465k      0  0:00:38  0:00:38 --:--:-- 1476k
>>> curl http://localhost:8383/test --output bla.txt  0.04s user 0.14s system 0% cpu 38.657 total
>>>
>>> johanbrichau at JohansMacBookAir ~ % time curl http://localhost:8383/test --output bla.txt
>>>   % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
>>>                                  Dload  Upload   Total   Spent    Left  Speed
>>> 100 55.2M  100 55.2M    0     0  45.0M      0  0:00:01  0:00:01 --:--:-- 45.0M
>>> curl http://localhost:8383/test --output bla.txt  0.01s user 0.07s system 6% cpu 1.255 total
>>>
>>>> On 23 May 2024, at 23:45, Johan Brichau wrote:
>>>>
>>>> Hi Otto,
>>>>
>>>>> On 20 May 2024, at 19:08, Otto Behrens wrote:
>>>>>
>>>>> Okay, I'll give that a try and see what I can get when using the Zinc adaptor in GemStone...
>>>>
>>>> It took me a bit longer to actually get started since I wanted to debug this on my Mac using a GsDevKit_stones setup and I had to dig into another rabbit hole first :-)
>>>> Anyway, I set up the simple handler below and I can also confirm that a file of 60MB takes 40s to be served and 28MB takes 10s.
>>>>
>>>> In Pharo, the 60MB file takes 3s.
>>>> I'll dive into this a bit more tomorrow...
>>>>
>>>> get
>>>>
>>>>     | file |
>>>>     file := GsFile openReadOnServer: '/Users/johanbrichau/testfile'.
>>>>     ^ file contents
>>
>> _______________________________________________
>> Glass mailing list
>> Glass at lists.gemtalksystems.com
>> https://lists.gemtalksystems.com/mailman/listinfo/glass

From otto at finworks.biz Sun May 26 08:25:37 2024
From: otto at finworks.biz (Otto Behrens)
Date: Sun, 26 May 2024 17:25:37 +0200
Subject: [Glass] rest api returning large result
In-Reply-To:
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be> <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be> <04034116-8473-4E7A-A1F6-8F448F2DA7B4@yesplan.be>
Message-ID:

Johan, thank you very much.
From otto at finworks.biz Sun May 26 08:25:37 2024
From: otto at finworks.biz (Otto Behrens)
Date: Sun, 26 May 2024 17:25:37 +0200
Subject: [Glass] rest api returning large result
In-Reply-To: 
References: <1A2F2AA2-8F61-44EB-BABA-985E093AD167@yesplan.be> <65879E80-BF2A-46FE-977C-526FCB25425F@yesplan.be> <0A668424-67E6-4048-9E87-03C09E389BDC@yesplan.be> <04034116-8473-4E7A-A1F6-8F448F2DA7B4@yesplan.be>
Message-ID: 

Johan, thank you very much.

Otto Behrens
+27 82 809 2375
www.finworks.biz

On Sun, May 26, 2024 at 1:21 PM Johan Brichau via Glass
<glass at lists.gemtalksystems.com> wrote:

> In the meantime, I found the real cause in
> ZnBivalentWriteStream>>#next:putAll:startingAt:, in a change that was made
> 11 years ago in commit [1] (by myself... :roll-eyes:)
> [...]

_______________________________________________
Glass mailing list
Glass at lists.gemtalksystems.com
https://lists.gemtalksystems.com/mailman/listinfo/glass