Remote desktop shared memory maximum users - opinions

Posted by marcballesteros on 30-Oct-2017 11:49

Hello.

We have an OpenEdge 11.6 application, developed with ADM2 technology, running on Windows. We normally have performance problems on the client computers.

We are considering having everyone work directly on the servers, using shared memory connections. That way, our application performs significantly better.

But I am having a discussion with a colleague about putting all the users to work on the same database server and RDS server, all on a single computer. My opinion is that working on the database server is not safe, in terms of possible malware and the lack of redundancy if something fails while many users are connected to the same computer, and that it is better to use a separate Terminal Server machine.

My colleague says there is no problem connecting all users remotely to the same database server, regardless of the number of users.

I would like to know your opinions about the maximum number of users connected remotely to the same database server (our customers are not very big companies, about 30-70 users, and they interact with Outlook, Excel, etc.).

I would also like to ask for recommendations to improve general application performance in client/server mode that do not involve changing our code.

Thanks.

Kind regards.

All Replies

Posted by James Palmer on 30-Oct-2017 12:17

This is purely my opinion, but I would never connect a user session via shared memory. If they crash holding a lock then there's a good chance they will bring the databases down as well. It's just not worth the risk.

We have clients who have a number of RDS servers load balanced to connect to the databases over the network. Our largest customer has around 120 users on 4 RDS servers. They have no problems at all with performance on the Progress side. They have some performance issues because they cut costs in terms of hardware, particularly disk, but in terms of the application itself it doesn't bat an eyelid with that many users.

If you're experiencing performance problems, it's important to isolate the causes. There's no point tuning the database if the network is the bottleneck. There's definitely no point tuning the database if code is the problem.

There's a very good session on Friday at EMEA PUG Challenge (www.pugchallenge.eu/.../program-details) if you're interested in improving the performance of your application, and working out what the bottlenecks are. In fact there's probably a lot of content that will be relevant, but be quick, it's only 2 weeks away!

Posted by Rob Fitzpatrick on 30-Oct-2017 12:18

You will get the best client performance from having them connect self-service (shared memory).  But as you note, if you make your database server into a terminal server, that brings its own risks and challenges.  Such a configuration puts more load on the database server and takes OS resources away from the database, making it more challenging to tune.  It means running Windows, with anti-malware software and other overhead.  It may mean giving shell access to all of your users, if you are doing desktop virtualization.  Personally, I prefer not to use Windows as a DB platform, though many of my clients choose it as it is more familiar for them to administer.

As I said, out-of-the-box client/server performance may be much less than self-service.  That said, you shouldn't be accepting the configuration defaults for client startup parameters or for client/server-related database broker startup parameters.  With proper tuning of parameters like Mpb, Ma, Mi, Mm, prefetch*, etc., combined with use of jumbo ethernet frames, you may well be able to get client/server performance that is good enough for your needs.
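For example, here is a minimal sketch of what those broker-side knobs might look like in a parameter file; the database name, port and every value are made-up placeholders to show where the parameters go, not tuned recommendations:

    # cs-broker.pf - illustrative client/server broker settings (all values are guesses)
    -S 20001                # service/port the remote clients connect to
    -Mpb 5                  # maximum servers this broker may spawn
    -Ma 10                  # maximum remote clients per server process
    -Mi 1                   # clients per server before the broker starts another one
    -Mm 8192                # network message size, up from the 1024-byte default
    -prefetchNumRecs 100    # records packed into each prefetch message (NO-LOCK queries)
    -prefetchFactor 100     # fill each message to this percentage before sending
    -prefetchDelay          # fill the first message too, instead of sending one record

    proserve mydb -pf cs-broker.pf

Whether any of these actually helps depends on your network and access patterns, so change them one at a time and measure.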

You ask about general performance-tuning recommendations.  It is hard to say without really knowing anything about your environment or application.  But in general terms:

- use 64-bit OE (at least on the back end)

- use the Enterprise RDBMS

- ensure you have good storage hardware and sufficient RAM

- leverage caching wherever you can, through appropriate settings for -B, -B2, -Bt, -tmpbsize, -Bp, -mmax, -bibufs, -aibufs, -omsize, -ecsize, etc., to minimize physical I/O (there is a sketch of what these look like in parameter files after this list)

- ensure your database has a modern structure (all application data in Type II areas, one object type per area, appropriate RPB/BPC settings)
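To make the caching bullet concrete, here is a minimal sketch of the sort of parameter files those options live in; every size below is a placeholder for illustration and would need to be sized against the machine's RAM and your monitoring, not copied:

    # db-server.pf - server-side caching (placeholder sizes)
    -B 500000       # primary buffer pool, in database blocks
    -B2 50000       # alternate buffer pool, in blocks (Enterprise RDBMS)
    -bibufs 32      # before-image buffers
    -aibufs 64      # after-image buffers, if after-imaging is enabled
    -omsize 2048    # storage object cache entries

    # client.pf - client-side caching (placeholder sizes)
    -mmax 16384     # r-code execution buffer, in KB
    -Bt 5000        # temp-table buffer pool, in blocks
    -tmpbsize 8     # temp-table block size, in KB
    -Bp 64          # private read-only buffers for this session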

Performance is a big subject.  It would be helpful to know more about your environment and any specific performance issues you have today.  Also, the KB is a good resource (knowledgebase.progress.com).  It has several articles about database performance, client/server performance, query tuning, etc.

Posted by ChUIMonster on 30-Oct-2017 14:31

FUD aside, running hundreds of users with shared memory connections is very common in the UNIX world.  It is not all that risky.  And it does indeed perform much better than client-server does.  You have to avoid "kill -9" to avoid the dreaded crash while holding a latch, but that really isn't all that hard.  And the benefits are substantial.  (It is also cheaper because you do not need a "client networking" license...)  The main risk is know-nothing sysadmins whose approach to all problems is the "kill -9" hammer, because they mistakenly believe that "it always works".  If your company employs such a person you might want to ask HR to update their hiring matrix to ensure that the next sysadmin is a bit more qualified.
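As an aside on the "kill -9" point: the usual safer way to get rid of a stuck self-service session is to have the database disconnect it rather than killing the process, e.g. with proshut (the path and user number below are made up):

    promon /db/prod/mydb                      # look up the user number (Usr) in the User Control screen
    proshut /db/prod/mydb -C disconnect 23    # then ask the database to disconnect user 23 cleanly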

Windows is challenging because there are many things that happen in the Windows world that are effectively a "kill -9".   Like killing a process with Task Manager.  Or having your background jobs killed for you when you log out.  So there is reason to be concerned if you must live in that world.

App servers are intended to provide a way to get the best of both worlds.  If your ADM code makes use of app servers then it might be sufficient to move those to the db server and run them as shared memory connections.  That  avoids the end-user misbehavior problems while gaining the benefits of shared memory connections.
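To illustrate (with made-up paths, host and port), the only difference in the connection parameters is whether the session names the database directly or goes through -H/-S:

    # self-service (shared memory) - only possible for sessions running on the database host
    -db /db/prod/mydb

    # client/server - what a remote RDS client or an agent on another box would use
    -db mydb -H dbhost -S 20001

So moving the AppServer agents onto the database machine and dropping -H/-S from their startup parameters is usually all it takes on the connection side.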

If your code does not use app servers and you don't want to change that you're kind of stuck.

Rob's general points about tuning are a good starting place.  There are also a bunch of useful things that can be done with regards to the client/server connections themselves.  -Mm and -prefetch* are good starting points.

If you need more help I know people who would be happy to help with the details on a professional basis ;)

This thread is closed