The reason you are observing this behavior is the in-memory cache used by Sitefinity, which is separate for each of your servers. Although the database is shared and both servers read from it, once an item has been loaded on a server, every subsequent request on that server is served from its cache.
The scenario you describe (distributing load between different servers) falls into the category of load balancing. We support this in the Professional edition and higher, and it is also available as an add-on for the Standard edition. More information on how to enable load balancing in Sitefinity can be found in our documentation:
Let us know if this doesn't work for you.
Exactly. Setting up the sites to run in NLB mode like that will fix the issue.
To set this up on Sitefinity 6.0:
1. Set up an approval workflow on your first website.
2. Create an item on the first site and send it for approval. Until this item is approved and published, it will not be synced to site 2, even if you execute a Sync operation.
3. Have someone on the first site approve (publish) the item. Now, if you sync the content, it will appear on the second site as published.
There is also another approach I can recommend. To secure your backend, you can use an IP whitelist. Add a rule that blocks /Sitefinity (but does not block /Sitefinity/Public/Services) for all external IPs; this way, nobody except the whitelisted IPs will be able to access the backend.
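As a minimal sketch of such a rule, here is what the whitelist could look like in web.config using the IIS IP and Domain Restrictions module (the module must be installed, and the `ipSecurity` section may need to be unlocked in applicationHost.config; the IP address below is a placeholder for your own whitelisted address):

```xml
<!-- Sketch: lock down /Sitefinity to whitelisted IPs only -->
<location path="Sitefinity">
  <system.webServer>
    <security>
      <!-- Deny everyone not explicitly listed -->
      <ipSecurity allowUnlisted="false">
        <!-- Placeholder: replace with your office/editor IP -->
        <add ipAddress="203.0.113.10" allowed="true" />
      </ipSecurity>
    </security>
  </system.webServer>
</location>

<!-- Keep the public services reachable for everyone -->
<location path="Sitefinity/Public/Services">
  <system.webServer>
    <security>
      <ipSecurity allowUnlisted="true" />
    </security>
  </system.webServer>
</location>
```

The second `location` element re-opens /Sitefinity/Public/Services, since the more specific path overrides the restriction on its parent.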
You are correct - the only scheduling options are the ones you see, and you can have only one schedule at a time.
Apart from SiteSync, other options you might consider are:
- Deploy your database from staging to production, provided you don't have any data generated on the live site (such as forum posts, comments, or statistics). If that is the case, deploying the database would work perfectly for you. It can even be done without downtime if you are in an NLB environment.
- Use two load-balanced Sitefinity instances, similar to what is described at the beginning of the thread, although this is not a very common practice. One node would be public and the other hidden and used only by your content editors. On the public node, you can block backend access with a firewall. The important thing is that the two instances must be able to reach each other's servers, so that they can invalidate each other's cache.
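To illustrate the first option, the staging-to-production database deployment could look roughly like this, assuming SQL Server and the sqlcmd tool; the server names, database names, and backup path are all placeholders, and the backup file would need to be copied (or written to a share visible) to the production server between the two steps:

```shell
# 1. Back up the staging database (placeholder names throughout)
sqlcmd -S StagingServer -Q "BACKUP DATABASE [SitefinityStaging] TO DISK = N'C:\Backups\sitefinity.bak' WITH INIT"

# 2. On the production side, restore the backup over the production
#    database (WITH REPLACE overwrites the existing database)
sqlcmd -S ProdServer -Q "RESTORE DATABASE [SitefinityProd] FROM DISK = N'C:\Backups\sitefinity.bak' WITH REPLACE"
```

In an NLB setup you can avoid downtime by taking one node out of rotation, pointing it at the restored database, and then rotating the nodes.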