[Glass] Fwd: GLASS performance & cleanup scripts
Leo De Marco (Smalltalking)
leo at smalltalking.net
Sat Nov 30 13:21:39 PST 2013
Hi James!
Thanks for the info! It's very helpful for me!
I'm going to work with this next week and I'm sure I'm going to keep asking
more questions :)
Leo
From: glass-bounces at lists.gemtalksystems.com
[mailto:glass-bounces at lists.gemtalksystems.com] On behalf of James Foster
Sent: Friday, November 29, 2013 12:06
To: glass at lists.gemtalksystems.com
Subject: [Glass] Fwd: GLASS performance & cleanup scripts
Hi Leo,
Welcome to GemStone. You have asked a number of good questions and we might
end up breaking them down a bit.
First, as to configuring the Shared Page Cache (SPC), the rule is simple:
get the most that you can afford (up to slightly more than the repository
size). With the free license the maximum size is 2 GB, but that should be
pretty good for your database.
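To make the sizing concrete, here is a rough sketch of the arithmetic using
your numbers from below (the SHR_PAGE_CACHE_SIZE_KB name is the stone
configuration parameter I have in mind; please verify it against the System
Administration Guide for your version):

  | repoGB ramGB licenseCapGB cacheGB |
  repoGB := 15.       "current repository size"
  ramGB := 8.         "physical RAM on the host"
  licenseCapGB := 2.  "free-license ceiling on the shared page cache"
  "Take the smallest of the three; here the license cap wins, so
   SHR_PAGE_CACHE_SIZE_KB would be set to 2 * 1024 * 1024 (2 GB in KB)."
  cacheGB := (repoGB min: ramGB) min: licenseCapGB.
  cacheGB * 1024 * 1024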
The free license does not have a limit on the database size, so you can keep
growing.
The #'fullBackupCompressedTo' method does an object-by-object backup that is
not identical to the extent0.dbf file in layout (it has all the objects, but
is typically more compact since the extents can have free space). The
alternative is to suspend checkpoints, copy the extent(s), and resume
checkpoints. That will give you an extent backup that can be used directly
as a repository. Unless you have reason to prefer the extent-file-copy
approach, I suggest you stick with the existing backup (and practice a
restore occasionally).
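In GemStone Smalltalk the two approaches look roughly like this (only a
sketch; '/backups' is an example path, and the checkpoint methods are the
ones I remember on System, so check the System Administration Guide before
scripting this):

  "Object-by-object backup; restorable later with restoreFromBackup:."
  SystemRepository fullBackupCompressedTo: '/backups/mybackup.dbf.gz'.

  "Extent-file-copy alternative: suspend checkpoints, copy at the OS
   level (cp), then resume."
  System suspendCheckpointsForMinutes: 15.
  "... cp the extent file(s) to the backup location here ..."
  System resumeCheckpoints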
Often one will choose to start a new transaction log just before taking a
backup. Then you don't have to wonder which transaction logs are needed to
go with a particular backup. I have seen scripts that delete transaction
logs that are older than the oldest backup that you would use for disaster
recovery (and you typically keep at least two backups).
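For example (again just a sketch with an example path):

  "Start a new tranlog immediately before the backup, so the backup
   lines up with a log boundary and you know which logs belong to it."
  SystemRepository startNewLog.
  SystemRepository fullBackupCompressedTo: '/backups/mybackup.dbf.gz'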
Strategies for making reports faster typically involve indexes and/or
clustering. You can read about them in the programming guide.
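As a tiny sketch of the indexing side (the 'orders' collection and the
'customer.name' path are invented for the example, and I am assuming the
equality-index protocol described in the Programming Guide, so treat this
as illustrative only):

  | orders |
  orders := IdentityBag new.  "stand-in for the report's persistent collection"

  "Build an equality index on the path the report queries most often."
  orders createEqualityIndexOn: 'customer.name' withLastElementClass: String.

  "A selection block (curly braces) lets the query use that index."
  orders select: {:each | each.customer.name = 'ACME'}

Clustering (keeping objects that are read together on the same disk pages)
is the other half of the story; the Programming Guide covers both.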
Keep asking questions and let us help you get the system running smoothly.
Of course, at some point I might suggest you pay for consulting and/or a
license to have a larger Shared Page Cache. ;-)
James
On Fri, Nov 29, 2013 at 4:12 PM, <leo at smalltalking.net> wrote:
Hi all!
I have recently begun to work with GLASS. I have experience working with
Smalltalk but no experience working with GemStone & Linux, so be patient
with me :)
My client already has a GLASS system running in a production environment, but
with some performance problems when they run large reports, so I planned a
strategy with 3 stages:
1) Review the hardware
2) Review the server & GemStone configuration
3) Review the code
The first issue, the hardware, I have already checked and I think it's OK for
a 15 GB repository and only 10 concurrent users. It is composed of:
- HP ProLiant ML350 G5
- 8 CPU cores (Intel Xeon E5405 @ 2 GHz)
- 8 GB RAM
- 3 x 146 GB disks in RAID 5 (for storage) (bays 1-2-3)
The only thing I have read about in some posts is changing the disk drives to
SSDs, but that is not a quick decision here where I am :)
The second one is where I actually am now. I changed the host OS: I installed
VMware ESXi and then mounted the GemStone/S 64 Bit VMware Appliance on it.
Then I added 2 more spindles for the repository and the tranlogs and also
increased the swap partition (see the attached image).
Questions:
- My repository size & concurrent users don't fit a single pre-selected
configuration (the repository size calls for the Medium configuration while
the user count fits the Small one), so what are the recommended main
variables for the config file according to my repository values?
- Repository size: my repository is 15 GB, and I read somewhere that 16 GB is
a limit. Why is that? Is it recommended in this case to split it into 2
repositories?
- Estimating the Shared Page Cache: I read here:
http://programminggems.wordpress.com/2012/04/06/configuring-shared-memory/
and also the GemStone installation guide documentation
(http://community.gemstone.com/download/attachments/6816862/GS64-InstallGuide-Linux-3.1.pdf),
but the calculations are different, so I'm confused about it. What are the
things I have to consider to properly configure the shared page cache?
- Cleanup scripts: I read here about automating a cleanup script, but I also
found different approaches in other scripts:
a) The backup made with #fullBackupCompressedTo: produces a gzip file that,
when decompressed, has no extension (it is a copy like extent0.dbf, no?). The
other approach tells me to make a direct copy of the extent (cp command).
b) The tranlog cleanup script checks the files' datetimes, but in another
post (http://forum.world.st/Translog-space-problem-td2403379.html) I read
about #oldestLogFileIdForRecovery, which tells me the oldest tranlog needed
for a recovery. Is it more convenient to make a script with that info?
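Something like this is what I had in mind (just a sketch; I am assuming the
default tranlogNNN.dbf file naming and GsFile gciLogServer: for the logging,
so please correct me if those are wrong):

  | oldestNeeded |
  oldestNeeded := SystemRepository oldestLogFileIdForRecovery.
  "Tranlogs whose file id (the NNN in tranlogNNN.dbf) is below oldestNeeded
   are not required for crash recovery; the script would only delete the
   ones that are also older than the oldest backup we keep."
  GsFile gciLogServer: 'Oldest tranlog id needed for recovery: ',
    oldestNeeded printString.
  oldestNeeded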
Thanks in advance!
Leo