I have installed OpenEdge 11.1 on a 64-bit Windows Server 2012 R2 Hyper-V machine and created a database. When I started loading table contents into the database, I realized it is too slow: for example, a 170,000-record file took about 10 minutes to load on a normal machine, while the same file took about two hours on the Hyper-V machine. Why? Is there any configuration to be done on the OpenEdge side or on the Hyper-V side to improve performance?
There could be a hundred different reasons. Since you don't detail HOW you loaded the data (binary? dictionary? bulk? single-user? multi-user? DB parameters?), I will assume you did it the same way on both. That means it is most likely a question of I/O throughput: your virtual machine simply has less of it than the "normal" machine.
Other possibilities are that the other VMs are consuming the resources of the physical machine, leaving nothing for you, or that you simply did not allocate enough resources to your VM.
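One quick way to sanity-check raw I/O throughput on both machines is a simple sequential-write test. This is a rough sketch, not an OpenEdge-specific benchmark: on the Windows VM you would use a tool such as diskspd, while on a Unix box plain dd gives a ballpark figure. The file path and sizes below are arbitrary:

```shell
# Rough sequential-write check (Unix/Linux; path and sizes are arbitrary).
# Writes 64 MB in 1 MB blocks; conv=fdatasync forces the data to disk so the
# page cache does not inflate the MB/s figure dd reports at the end.
dd if=/dev/zero of=/tmp/io_test.bin bs=1M count=64 conv=fdatasync
rm -f /tmp/io_test.bin
```

If the VM's number is a fraction of the physical machine's, no amount of database tuning will close the gap.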
So, you are loading across the network and expecting it to be fast?
Consulting in Model-Based Development, Transformation, and Object-Oriented Best Practice http://www.cintegrity.com
Monae: There is no way we can help you unless you post more information. What is the difference between the old and new machine? How are you loading? The VM has 10 GB of RAM, OK, but who is using it? By a "500 hardisk" I presume you mean 500 GB? But what is underneath it? Is it a LUN carved from a SAN? Is it a physical disk? What?
And what did you do to load? Your statement "the load is done from client" tells me very little. TMH asks if it's across the network, but maybe the client is local on the VM. We don't know, because you didn't tell us.
We need much more data.
How much memory is in the real machine?
In addition to the 10 GB of RAM, what are the other VM configuration settings?
Please describe your load procedures.
Thanks for your help. To be more precise, I tried another scenario. I ran the same report on the old Sun machine (2 GB RAM, no VM, 75 GB disk) and on the 2012 R2 Hyper-V machine (20 GB RAM, 500 GB disk, one 4-core 2.0 GHz CPU). I have the whole machine to myself because it is newly installed and not yet in use,
and both use the same network. The report took 15 minutes on the old Sun machine and 30 minutes on the VM.
The host has 2 CPUs, 4 cores each at 2.0 GHz, and 96 GB of real memory; we are using a SAN. We have four VMs, and my VM has the most resources. Besides the load, I also ran the same report on the old Sun server and on the new VM, which has far more resources than the Sun: it took 15 minutes on the old Sun and 30 minutes on the new VM, using the same network.
Your load speed will also be affected by database configuration: which database license is in use (Enterprise or Workgroup), BI buffers, BI block size, BI cluster size, which helper processes are running, your DB structure, etc. That information would be helpful.
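To make the DB-configuration point concrete, here is an illustrative server startup with the kind of parameters that commonly affect load speed. These flags are real OpenEdge startup parameters, but the database name, port, and every value shown are placeholders to tune for your hardware, not recommendations:

```shell
# Illustrative only (assumes an Enterprise license; values are placeholders).
# -B       database buffer pool, in DB blocks
# -bibufs  before-image buffers
# -spin    spinlock retry count before a latch wait
proserve mydb -S 12345 -B 100000 -bibufs 64 -spin 10000
```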
Let's stick to one issue at a time. The report execution time is affected by another 100 factors and it will get confusing if we try and address both the original write issue and the new read issue in the same thread. I understand that there is likely overlap in the causes of both issues but it will just get confusing trying to address both.
This is what we know and what we do not know:
1. Old machine = some Solaris box with 2 GB RAM (seems unlikely) and a 75 GB HD
2. New VM: Win2K12, 20 GB RAM
2.a) The HDD has a capacity of 500 GB but, as I mentioned previously, I DO NOT CARE about capacity. I care about I/O throughput
3. You are loading data and it is slower on the Windows VM
3.a) We DO NOT KNOW how you loaded data. C/S? Dictionary? Bulk? Binary?
For the report:
1. Takes twice as long on Windows
2. We DO NOT KNOW DB startup parameters and client connection parameters on Sun vs Windows
Long shot, but worth mentioning: if you have moved from Unix to Windows and you are using a shared-memory connection to the server (from a local client), bear in mind that the configuration is different, and what would be a shared-memory connection on Unix might not be one on Windows. That would mean performance is bounded by network traffic.
If this is the case, could you post how you connect your client on Unix and on Windows?
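For reference, the difference usually shows up in the client connection parameters. A sketch, with database path, host, and port as placeholders:

```shell
# Self-service (shared-memory) connection: client runs on the DB server
# and, with no -H/-S, attaches directly to shared memory.
mpro /db/sports -p start.p

# Client/server (TCP) connection: -H and -S force a network connection,
# even if the client happens to run on the same machine as the database.
mpro /db/sports -H dbhost -S 12345 -p start.p
```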
I use the same startup parameters on the Sun Solaris and Hyper-V machines. Concerning the data load, I did not use bulk load; I used the traditional Data Dictionary load utility. Do you have any startup parameters I should use on my Hyper-V machine to get good performance? If yes, what are they?
On both Windows and Unix my users are Windows-based, and I use Client Networking to connect to the database using a host name and port number.
Is the OpenEdge DB license on the server Workgroup or Enterprise? Assuming Enterprise, try the following on the Windows server:
1. Open a proenv command line (Start - Programs - OpenEdge Proenv)
2. cd to the DB directory
3. Stop the db (proshut db) or use OpenEdge Explorer if that's how it's configured
4. proutil db -C truncate bi -bi 16384 -biblocksize 16
5. If using OE Explorer, make sure to enable one APW and the BIW
6. Start the database and repeat load test
7. Report back the results.
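Pulled together, the sequence above looks roughly like this from a proenv prompt, assuming an Enterprise license, a manually started database named mydb in the current directory, and a placeholder port (if you use OE Explorer, start/stop the DB and the helper processes there instead):

```shell
# From a proenv prompt, in the database directory.
proshut mydb -by                                        # stop the database (unconditional batch shutdown)
proutil mydb -C truncate bi -bi 16384 -biblocksize 16   # BI cluster size 16384 KB, BI block size 16 KB
proserve mydb -S 12345                                  # restart the broker (port is a placeholder)
probiw mydb                                             # start the before-image writer
proapw mydb                                             # start one asynchronous page writer
```

Then repeat the load test and compare timings.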
This is still not the fastest way but it's tough to teach a full dump&load class in a discussion forum. You should consider signing up for some DBA training. What part of the world are you in?
I'm sure that some of the DBA pros around here (I'm not one of them!) are available for on-site or remote consulting services.
Architect of the SmartComponent Library and WinKit