How to calculate the value for the -B and -L startup parameters

Posted by aaravindhantj on 21-Jul-2014 08:18

Sir,

How do I calculate the values for the -B and -L startup parameters for a database? I know we can set the value from a minimum of 1 lac up to a maximum of 5 lacs, but I do not know on what basis to assign or calculate the values for -B and -L. Expecting a reply.

Thank you, sir

All Replies

Posted by Paul Koufalis on 21-Jul-2014 08:26

Hi,

First, it is always a good idea to tell us about your environment when posing such questions.  What version of Progress? How many concurrent users? How big are the databases? What kind of application: ERP? WMS? CRM?

There is no such calculation, as it is completely application dependent.  You could have a 10 concurrent user system that reads and locks millions of rows or a 1000 concurrent user system that reads and locks millions of rows.

100K is a good round number for -L.  On modern versions of Progress I believe each lock-table entry consumes less than 100 bytes, so that's no more than 10 MB of memory.
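As a quick sanity check on that memory math, here is a sketch (Python used only for the arithmetic; the ~100-bytes-per-entry figure is the estimate quoted above, and gus mentions ~72 bytes later in this thread):

```python
# Rough lock-table memory estimate, assuming ~100 bytes per lock-table
# entry (an upper-bound estimate from the post above; actual size
# depends on your Progress version).
BYTES_PER_LOCK_ENTRY = 100

def lock_table_memory_mb(l_value: int) -> float:
    """Approximate memory consumed by the lock table for a given -L."""
    return l_value * BYTES_PER_LOCK_ENTRY / (1024 * 1024)

# -L 100000 costs roughly 9.5 MB:
print(lock_table_memory_mb(100_000))
```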

For -B, you'll find a lot of articles saying "10% of your DB size".   Since we have ZERO information about your site, start there and monitor your hit ratio.  One important note though: a small increase in hit ratio requires a LARGE increase in -B.  If you start at 10% of DB size and see your buffer hit ratio is 95%, double the -B value if possible.
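To make the "10% of DB size" starting point concrete, a small sketch follows (Python just for the arithmetic; the 8 KB block size and 50 GB database are illustrative assumptions, not values from this thread):

```python
# Derive a starting -B from the "10% of DB size" rule of thumb.
# -B is expressed in database blocks, so divide the target cache size
# in bytes by the database block size.

def starting_b(db_size_gb: float, block_size: int = 8192,
               fraction: float = 0.10) -> int:
    """Return a starting -B value covering `fraction` of the database."""
    target_bytes = db_size_gb * 1024**3 * fraction
    return int(target_bytes // block_size)

# e.g. a 50 GB database with 8 KB blocks:
print(starting_b(50))  # 655360 blocks, i.e. ~5 GB of buffer pool
```

From there, monitor the hit ratio and grow -B as Paul describes.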

Posted by Abri Venter on 21-Jul-2014 08:37

Hello

Depending on some factors I would also introduce -B2. Rich Banville had a good talk at Exchange 2013 if I remember correctly.

Just remember: everything you put into -B2 should fit without ever having to swap.

Thank you

A Venter

Posted by Rob Fitzpatrick on 21-Jul-2014 12:33

When you say "minimum 1 lac and maximum up to 5 lacs", does "lac" mean 100,000, i.e. 10^5?  If so, those are not the minimum or maximum values for either -L or -B.  If not, can you please clarify?

Regarding -L, I tend not to change it from the default as it is sufficient for my needs.  However if your application has many users or your application code has transaction-scoping issues then your needs may be greater.  If you see errors like "Lock table overflow, increase -L on server (915)", then you need to increase -L (and possibly also fix your code).

Regarding -B, as Paul said it is difficult to say what is right for you, or even what is possible given your version of Progress and your application.  In short, you want to allocate the largest amount of buffer pool that you can that contains the working set of your application.  The working set would be the portion of the data in your database that your application frequently reads, i.e. the data that your application would benefit from having memory-resident.  You could have a large database with a large working set (your application reads a lot of that data) or a small working set (your application reads only a small portion of it).  The maximum size of the working set is the size of the database, so you gain no benefit from allocating a buffer pool larger than the size of your database.  The size of your buffer pool in bytes is the value of -B times the database block size.

Also, be aware of your limits.  Using a 32-bit RDBMS license you are limited to the allocation of less than 2 GB of buffer pool.  And if you have a federated application (i.e. several databases) then you are further constrained, as any shared-memory client must be able to map all of the shared memory segments of all the attached databases into its virtual memory address space.  With a 64-bit RDBMS license you don't have that practical Progress limitation but you could still be affected by the OS shared memory limit.  In Unix, check ipcs -l for limits on total shared memory size and individual segment size.
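The 2 GB ceiling translates into a hard cap on -B, using the same bytes-to-blocks formula in reverse (a sketch; real limits are tighter, since other shared-memory structures also count against the ceiling):

```python
# Maximum -B that fits under a shared-memory ceiling, e.g. the ~2 GB
# limit of a 32-bit RDBMS license described above. Treat the result as
# an upper bound, not a target: the buffer pool is not the only thing
# in shared memory.

def max_b_for_limit(limit_bytes: int, block_size: int = 8192) -> int:
    """Largest -B (in blocks) that fits in `limit_bytes` of memory."""
    return limit_bytes // block_size

TWO_GB = 2 * 1024**3
print(max_b_for_limit(TWO_GB))  # 262144 blocks at an 8 KB block size
```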

How do you know if you have enough buffer pool, i.e. if adding more won't help you?  One metric to look at is empty buffers in the promon Buffer Cache Status screen (promon r&d | 1 | 7).  If your application has been running long enough to have completed a typical business cycle (however you define that) and you still have empty buffers then increasing the size of -B probably won't help.  I say "probably" because someone could always run a new or different program that changes the working set of the application, which in turn changes your caching needs.  

Note that some database utilities read a large portion of your database, or even all of it, like probkup online.  This means all of that data gets loaded into the buffer pool, whether it is later useful for your application or not.  For utilities like this I allocate a small amount of private buffers (e.g. -Bp 10) so the utility doesn't "foul" the buffer pool and evict data that is useful to the application.

Finally, bear in mind that buffer hit percentage is a very coarse metric and doesn't tell you everything you need to know about caching.  A low number is definitely bad and tells you that you would benefit from having more cache, but a high number doesn't necessarily tell you that all is well.  For example, a 99% hit percentage tells you that one out of every 100 reads is a cache miss and results in a disk read.  Although 99% sounds very good, if your total reads is a very large number then 1% of that (your cache misses) is also a very large number.  In other words, even with a very high buffer hit percentage you may still benefit from increasing the size of the buffer pool.
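The point about high hit percentages hiding large absolute miss counts is easy to see numerically (the 500-million-read figure below is purely illustrative):

```python
# Why a 99% buffer hit percentage can still mean heavy disk I/O:
# misses = total logical reads x (1 - hit ratio).

def cache_misses(total_reads: int, hit_ratio: float) -> int:
    """Number of reads that miss the buffer pool and go to disk."""
    return int(total_reads * (1 - hit_ratio))

# 500 million logical reads at a 99% hit ratio is still
# 5 million disk reads:
print(cache_misses(500_000_000, 0.99))
```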

Posted by gus on 22-Jul-2014 09:00

In theory, a detailed analysis of your application and the data in the database could allow you to calculate optimum values for the -B and -L configuration parameters. In practice, it cannot be done because there are too many variables. You must find suitable values empirically.

For -L, pick a number like 100,000 and then run your system for a while. Use the promon utility to look at the lock table high-water mark; that is the maximum number of locks that were in use all at once. If you are near the number of locks you allocated, then increase -L, allowing some extra for expansion. As long as you don't run out, you have a suitable value.
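That sizing loop can be sketched as follows (a minimal illustration; the 80% headroom threshold and the doubling step are my assumptions, not anything Progress documents):

```python
# Sketch of the empirical -L sizing loop: compare the lock table
# high-water mark (read from promon) against the current -L, and grow
# -L when headroom runs low. Threshold and growth factor are assumed
# values for illustration only.

def next_l(current_l: int, lock_hwm: int,
           threshold: float = 0.80, growth: float = 2.0) -> int:
    """Return a new -L: unchanged if the HWM leaves headroom, else larger."""
    if lock_hwm < current_l * threshold:
        return current_l
    return int(current_l * growth)

print(next_l(100_000, 30_000))  # 100000: plenty of headroom, leave it
print(next_l(100_000, 95_000))  # 200000: near the limit, double -L
```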

For -B, as long as you have adequate performance, you have enough. But do as Rob suggests. Note also that if you do /not/ have adequate performance, it may be caused by something that has nothing to do with -B.

Posted by Rob Fitzpatrick on 22-Jul-2014 10:45

Gus,

Are there application performance impacts for shared memory clients from greatly increasing the size of the lock table, e.g. more time spent reading the larger number of records?  

Posted by gus on 22-Jul-2014 10:50

No.

The number of records you read is determined by how many there are and what queries your code executes, not by the size of the lock table.

Making the lock table larger consumes more memory (72-ish bytes per entry these days). That in itself is not detrimental unless you begin to incur paging I/O.

Reading the lock table via FOR EACH _Lock will of course take longer, but that is not a normal thing for an application to be doing.

This thread is closed