Doing some performance testing and tuning, and we need to flush the buffer pool prior to running the test after making code changes. Does anyone have any good ideas on how to do this without restarting the database broker or reading another set of unrelated records?
Buffers aren't flushed unless they're needed to store new data being buffered, based on the LRU algorithm. So one option is to read enough data, given the size of -B, that you completely flush any existing data. "Enough" may be difficult to calculate exactly, as it depends on a few factors, like whether you're using Type I or II storage areas, and how physically scattered your records are. An easier option would be to run a dbanalys, which would read all of your data. Again, this assumes that is enough to turn over your buffer pool.
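To get a feel for how much "enough" is, you can do rough arithmetic from -B and the database block size. The values below are made-up examples, not recommendations:

```shell
# Hypothetical sizing: how much data must be read to turn over -B.
# Example values only: -B 50000 buffers, 8 KB database block size.
B=50000              # -B setting (number of buffers)
BLOCK_SIZE=8192      # database block size in bytes
POOL_MB=$(( B * BLOCK_SIZE / 1024 / 1024 ))
echo "Buffer pool is roughly ${POOL_MB} MB"
# Reading comfortably more than this should evict the existing buffers;
# with Type I areas and scattered records, allow a generous safety margin.
```

The margin matters because scattered reads re-touch blocks already in the pool, so the raw pool size is a lower bound on how much you need to read.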
Given that you don't want to load new data into the buffer pool, I would suggest restarting the database. I know you don't want to do that either, for some reason. But if you want repeatable test results I think the best approach is to start from the same starting point with each test iteration.
I.e., doing anything that would actually flush the pool would take as much time as just restarting the database and would be less certain.
Restarting the database is the best way to ensure a consistent state in the -B database buffer pool. However, remember that things like crash recovery can affect what is in the buffer pool after a db restart, so a truncate bi and bigrow operation should be applied first.
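As a sketch, that reset sequence might look like the following. The database name, -B value, and bigrow cluster count are placeholders; adjust them for your environment:

```shell
# Illustrative reset between test runs; all values are placeholders.
proshut sports2000 -by                 # shut down the broker
proutil sports2000 -C truncate bi      # truncate the BI file (db must be offline)
proutil sports2000 -C bigrow 4         # pre-grow BI clusters so recovery and
                                       # BI growth don't pollute the first test
proserve sports2000 -B 50000           # restart with the -B under test
```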
The cost of putting something into the -B buffer pool can vary greatly based on where the db block is coming from. If the blocks you need are located in the file system cache, the cost of paging a buffer into the database buffer pool is much less than if a physical disk read is required. Ensuring that the file system cache is in a consistent state can also be a challenge when doing performance tests at this level. Rebooting the system is the most consistent way. Some have mentioned performing large file copies to clear the file system cache, but I don't know how consistent that really is.
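On Linux there is a more direct alternative to file copies for clearing the file system cache. This is Linux-specific and needs root; other platforms generally require a reboot or remount:

```shell
# Linux only, run as root: drop the OS page cache so the next test's
# reads hit the disk rather than cached file system pages.
sync                                   # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches      # 1=pagecache, 2=dentries/inodes, 3=both
```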
A trick you could use to ensure a more consistent environment is to restart the database and run a dbanalys between each performance test. This will not include the initial page-in of each block in your performance numbers (since that will have been done by the dbanalys), but it will give you a more consistent set of numbers to work with. Additionally, the start-up cost of populating the buffer pool is typically not as important as the runtime cost of managing a full buffer pool based on your application's use of the database.
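Put together, each test iteration could follow a sketch like this. The database name and -B value are placeholders, and `run_my_test` stands in for whatever benchmark you are running:

```shell
# One test iteration: same warm starting point every time (placeholders throughout).
proshut sports2000 -by                     # cold stop
proserve sports2000 -B 50000               # cold start
proutil sports2000 -C dbanalys > /dev/null # pre-warm: pages every block once
run_my_test                                # hypothetical benchmark command
```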
Thanks all for your comments - all make sense and were pretty much what I was expecting. I was kind of hoping that there was some undocumented feature, similar to EMPTY TEMP-TABLE!
You can flush the operating system's caches by unmounting and remounting the file systems that house the database.
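A sketch of that approach, assuming the database lives under a hypothetical /db mount point; the database must be shut down before the unmount:

```shell
# Placeholder paths and names; db must be offline before umount.
proshut sports2000 -by
umount /db                             # discards cached pages for that file system
mount /db
proserve sports2000 -B 50000
```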