on gluing a bunch of tools together, especially if they rely on different repositories.
Editors, compilers, and the other personal tools are less of an issue in distributed development. The key reason is that they generally don't touch shared corporate data; they work within the workspace or sandbox.
So what else is there to look out for?
Well, first of all, is there a lot of administration involved in setting up a new site? If so, something can, and generally will, go wrong. Not only do you want low administration, you also want flexible timing. Can I create a synchronization point on day 1 and not deploy the site until day 14, when I'm physically there? Is it easy to add a new site a few weeks or even months down the road without having to create a new synchronization point? Coordinating complex administration across long distances is tricky.
Next, does the solution continuously verify synchronization of data? What happens when your repository or tools are upgraded? If someone makes an upgrade mistake at one site, the data might drift out of sync. How soon will you find out? And even more important, how easy is it to recover? Recovery is likely easier the sooner you find out. Because you're dealing with multiple sites, an automatic recovery feature is a definite bonus.
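To make the idea concrete, here is a minimal sketch of drift detection between sites. All names here (`repo_digest`, `find_drift`, the site names) are hypothetical, and real tools work at a much finer grain, but the principle is the same: each site periodically publishes a digest of its repository contents, and a monitor flags any site that disagrees with the majority.

```python
import hashlib
from collections import Counter

def repo_digest(entries):
    """Hash a sorted list of (path, revision) pairs into one digest."""
    h = hashlib.sha256()
    for path, rev in sorted(entries):
        h.update(f"{path}@{rev}\n".encode())
    return h.hexdigest()

def find_drift(site_digests):
    """Return the sites whose digest disagrees with the majority."""
    majority, _ = Counter(site_digests.values()).most_common(1)[0]
    return [site for site, d in site_digests.items() if d != majority]

sites = {
    "boston": repo_digest([("src/main.c", 42), ("README", 7)]),
    "munich": repo_digest([("src/main.c", 42), ("README", 7)]),
    "tokyo":  repo_digest([("src/main.c", 41), ("README", 7)]),  # missed an update
}
print(find_drift(sites))  # → ['tokyo']
```

Because the digest is cheap to compute and compare, this kind of check can run continuously in the background, which is exactly what lets you find out about drift sooner rather than later.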
Also consider whether your solution requires each update or transaction to be fully processed at the master before it reaches the slave sites. If so, a large transaction can impose significant delays on the slave sites. The master should only have to order the transactions, and perhaps help ensure they are distributed to all sites for asynchronous execution at each one.
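A toy sketch of that division of labor, with hypothetical `Master` and `Site` classes: the master assigns a global sequence number and does no heavy processing, while each site buffers whatever arrives out of order and applies transactions in sequence at its own pace.

```python
import itertools

class Master:
    """Only imposes a total order on transactions; no heavy processing."""
    def __init__(self):
        self._seq = itertools.count(1)
    def order(self, txn):
        return (next(self._seq), txn)

class Site:
    """Applies transactions in sequence, asynchronously from the master."""
    def __init__(self, name):
        self.name = name
        self.applied = []
        self._pending = {}
        self._next = 1
    def receive(self, seq, txn):
        # Transactions may arrive out of order; buffer, then apply in order.
        self._pending[seq] = txn
        while self._next in self._pending:
            self.applied.append(self._pending.pop(self._next))
            self._next += 1

master = Master()
a = master.order("add file.c")
b = master.order("edit file.c")
east, west = Site("east"), Site("west")
east.receive(*a); east.receive(*b)
west.receive(*b); west.receive(*a)   # arrives out of order, still applied in order
print(east.applied == west.applied)  # → True
```

Note that a slow or large transaction at one site never stalls the master or the other sites; only the ordering step is centralized.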
Another factor: How difficult is it to upgrade your repository and tools? If new release upgrades are painful, then a replicated data solution will have painful periods. This is complicated further if your solution is a composite of tools, even if they run on a single repository.
If you identify a single-repository, end-to-end integrated solution with low administration and automatic problem detection and correction, you'll find that distributed development is easier than you thought. Yes, you still have to pay attention to management and process, but your solution gives you significant support here. Yes, you still have to ensure good communication, but again your solution will give you great support, especially if it supports the review process and electronic approval.
If you have reasonable bandwidth between sites, you may even find that you can work against a remote site without realizing you're not connected locally.
A warning, though: don't take technology at face value, especially when it's the face of a salesman. Bring it in and evaluate your distributed development on your local network; if that is a difficult process in itself, beware. Test out some real use cases, from all sites. Pay attention to administrative needs, to scalability, and to reliability. Force errors and check out the recovery procedures. Load in 10,000 files and see what happens.
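For the last test, even a throwaway harness is useful. This sketch (names and layout are my own invention) generates a realistic tree of small files that you can then point a candidate tool's import at, and times the generation step as a baseline:

```python
import os
import tempfile
import time

def make_files(root, count):
    """Create `count` small files spread across 100 subdirectories."""
    for i in range(count):
        sub = os.path.join(root, f"dir{i % 100:02d}")
        os.makedirs(sub, exist_ok=True)
        with open(os.path.join(sub, f"file{i}.txt"), "w") as f:
            f.write(f"sample content {i}\n")

with tempfile.TemporaryDirectory() as root:
    start = time.monotonic()
    make_files(root, count=1_000)   # scale to 10_000 for a real evaluation
    elapsed = time.monotonic() - start
    total = sum(len(files) for _, _, files in os.walk(root))
    print(f"created {total} files in {elapsed:.1f}s")
```

The interesting numbers come afterward: how long the tool takes to import the tree, to synchronize it to each remote site, and to recover when you deliberately interrupt the transfer halfway.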
There are always going to be issues: very large files across slow links, network outages, and so on. But in my experience, distributed development with a single repository and an integrated management suite can be pleasant, and is well within the reach of today's technology.
The measure of success is always going to be: How different is this from a single-site scenario?
And if you need a recommendation, don't hesitate to ask. I've been using this technology for years and would be more than willing to discuss it.